Re: [PATCH 2/2] drm/amdgpu: handle AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID on gfx10

2019-06-28 Thread Marek Olšák
Thanks. I'll push both patches with emit_ib_size updated for this patch.

Marek

On Thu, Jun 27, 2019 at 3:50 AM zhoucm1  wrote:

> any reason for not caring about .emit_ib_size in this one?
>
> -David
>
>
> On 2019-06-27 06:35, Marek Olšák wrote:
> > From: Marek Olšák 
> >
> > Signed-off-by: Marek Olšák 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 17 +
> >   1 file changed, 17 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > index 6baaa65a1daa..5b807a19bbbf 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > @@ -4257,20 +4257,36 @@ static void gfx_v10_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
> >   }
> >
> >   static void gfx_v10_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
> >  struct amdgpu_job *job,
> >  struct amdgpu_ib *ib,
> >  uint32_t flags)
> >   {
> >   unsigned vmid = AMDGPU_JOB_GET_VMID(job);
> >   u32 control = INDIRECT_BUFFER_VALID | ib->length_dw | (vmid << 24);
> >
> > + /* Currently, there is a high possibility to get wave ID mismatch
> > +  * between ME and GDS, leading to a hw deadlock, because ME generates
> > +  * different wave IDs than the GDS expects. This situation happens
> > +  * randomly when at least 5 compute pipes use GDS ordered append.
> > +  * The wave IDs generated by ME are also wrong after suspend/resume.
> > +  * Those are probably bugs somewhere else in the kernel driver.
> > +  *
> > +  * Writing GDS_COMPUTE_MAX_WAVE_ID resets wave ID counters in ME and
> > +  * GDS to 0 for this ring (me/pipe).
> > +  */
> > + if (ib->flags & AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID) {
> > + amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
> > + amdgpu_ring_write(ring, mmGDS_COMPUTE_MAX_WAVE_ID);
> > + amdgpu_ring_write(ring, ring->adev->gds.gds_compute_max_wave_id);
> > + }
> > +
> >   amdgpu_ring_write(ring, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
> >   BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
> >   amdgpu_ring_write(ring,
> >   #ifdef __BIG_ENDIAN
> >   (2 << 0) |
> >   #endif
> >   lower_32_bits(ib->gpu_addr));
> >   amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
> >   amdgpu_ring_write(ring, control);
> >   }
> > @@ -5103,20 +5119,21 @@ static void gfx_v10_0_set_rlc_funcs(struct amdgpu_device *adev)
> >   }
> >   }
> >
> >   static void gfx_v10_0_set_gds_init(struct amdgpu_device *adev)
> >   {
> >   /* init asic gds info */
> >   switch (adev->asic_type) {
> >   case CHIP_NAVI10:
> >   default:
> >   adev->gds.gds_size = 0x1;
> > + adev->gds.gds_compute_max_wave_id = 0x4ff;
> >   adev->gds.vgt_gs_max_wave_id = 0x3ff;
> >   break;
> >   }
> >
> >   adev->gds.gws_size = 64;
> >   adev->gds.oa_size = 16;
> >   }
> >
> >   static void gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(struct amdgpu_device *adev,
> > u32 bitmap)
>
>
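For context on the .emit_ib_size remark at the top of this thread: the conditional GDS_COMPUTE_MAX_WAVE_ID reset in the hunk above emits three extra dwords (the PACKET3_SET_CONFIG_REG header, the register offset, and the value), so the ring's worst-case IB-emit estimate has to grow by three. A minimal sketch of the accounting; the dword counts are read off the packet sequences in this patch, not taken from the full driver source:

```c
/* Sketch only: dword budget for gfx_v10_0_ring_emit_ib_compute()
 * after this patch.  Counts are read off the hunk above, not from
 * the full driver. */
#define EMIT_IB_INDIRECT_DW   4  /* PACKET3 header + addr lo + addr hi + control */
#define EMIT_IB_GDS_RESET_DW  3  /* SET_CONFIG_REG header + reg offset + value   */

/* Worst case the ring funcs' .emit_ib_size estimate must cover: */
enum { EMIT_IB_COMPUTE_WORST_DW = EMIT_IB_INDIRECT_DW + EMIT_IB_GDS_RESET_DW };
```

If the estimate is not bumped together with the emit function, the ring can run out of reserved space exactly when the reset packet is emitted, which is presumably why the question was raised.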
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-28 Thread Kenny Ho
On Thu, Jun 27, 2019 at 2:11 AM Daniel Vetter  wrote:
> I feel like a better approach would by to add a cgroup for the various
> engines on the gpu, and then also account all the sdma (or whatever the
> name of the amd copy engines is again) usage by ttm_bo moves to the right
> cgroup.  I think that's a more meaningful limitation. For direct thrashing
> control I think there's both not enough information available in the
> kernel (you'd need some performance counters to watch how much bandwidth
> userspace batches/CS are wasting), and I don't think the ttm eviction
> logic is ready to step over all the priority inversion issues this will
> bring up. Managing sdma usage otoh will be a lot more straightforward (but
> still has all the priority inversion problems, but in the scheduler that
> might be easier to fix perhaps with the explicit dependency graph - in the
> i915 scheduler we already have priority boosting afaiui).
My concern with hooking into the engine/lower level is that the
engine may not be process/cgroup aware, so the bandwidth tracking is
per device.  I am also wondering if this is potentially a case of
perfect getting in the way of good.  While ttm_bo_handle_move_mem
may not track everything, it is still a key function for a lot of the
memory operations.  Also, if the programming model is designed to
bypass the kernel, then I am not sure there is anything the kernel
can do.  (Things like kernel-bypass network stacks come to mind.)  All
that said, I will certainly dig deeper into the topic.

Regards,
Kenny

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-28 Thread Kenny Ho
On Thu, Jun 27, 2019 at 5:24 PM Daniel Vetter  wrote:
> On Thu, Jun 27, 2019 at 02:42:43PM -0400, Kenny Ho wrote:
> > Um... I am going to get a bit philosophical here and suggest that the
> > idea of sharing (especially uncontrolled sharing) is inherently at odds
> > with containment.  It's like, if everybody is special, no one is
> > special.  Perhaps an alternative is to make this configurable so that
> > people can allow sharing knowing the caveat?  And just to be clear,
> > the current solution allows for sharing, even between cgroups.
>
> The thing is, why shouldn't we just allow it (with some documented
> caveat)?
>
> I mean if all people do is share it as your current patches allow, then
> there's nothing funny going on (at least if we go with just leaking the
> allocations). If we allow additional sharing, then that's a plus.
Um... perhaps I was being overly conservative :).  So let me
illustrate with an example to add more clarity and get more comments
on it.

Let's say we have the following cgroup hierarchy (the letters are
cgroups, with R being the root cgroup.  The numbers in brackets are
processes.  The processes are placed with the 'No Internal Process
Constraint' in mind.)
R (4, 5) --- A (6)
 \
  B --- C (7, 8)
   \
    D (9)

Here is a list of operations and the associated effect on the sizes
tracked by the cgroups (for simplicity, each buffer is 1 unit in size.)
With the current implementation (charge on buffer creation, with
restrictions on sharing):
R   A   B   C   D   |Ops

1   0   0   0   0   |4 allocated a buffer
1   0   0   0   0   |4 shared a buffer with 5
1   0   0   0   0   |4 shared a buffer with 9
2   0   1   0   1   |9 allocated a buffer
3   0   2   1   1   |7 allocated a buffer
3   0   2   1   1   |7 shared a buffer with 8
3   0   2   1   1   |7 sharing with 9 (not allowed)
3   0   2   1   1   |7 sharing with 4 (not allowed)
3   0   2   1   1   |7 release a buffer
2   0   1   0   1   |8 release a buffer from 7

The suggestion as I understand it (charge per buffer reference, with
unrestricted sharing):
R   A   B   C   D   |Ops

1   0   0   0   0   |4 allocated a buffer
2   0   0   0   0   |4 shared a buffer with 5
3   0   0   0   1   |4 shared a buffer with 9
4   0   1   0   2   |9 allocated a buffer
5   0   2   1   1   |7 allocated a buffer
6   0   3   2   1   |7 shared a buffer with 8
7   0   4   2   2   |7 sharing with 9
8   0   4   2   2   |7 sharing with 4
7   0   3   1   2   |7 release a buffer
6   0   2   0   2   |8 release a buffer from 7

Is this a correct understanding of the suggestion?

Regards,
Kenny

Re: [PATCH libdrm 1/9] amdgpu: Pass file descriptor directly to amdgpu_close_kms_handle

2019-06-28 Thread Emil Velikov
On Mon, 24 Jun 2019 at 17:54, Michel Dänzer  wrote:
>
> From: Michel Dänzer 
>
> And propagate drmIoctl's return value.
>
> This allows replacing all remaining open-coded DRM_IOCTL_GEM_CLOSE
> ioctl calls with amdgpu_close_kms_handle calls.
>
> Signed-off-by: Michel Dänzer 
> ---
>  amdgpu/amdgpu_bo.c | 35 +++
>  1 file changed, 15 insertions(+), 20 deletions(-)
>
Fwiw patches 1-3 are:
Reviewed-by: Emil Velikov 

-Emil

Re: [PATCH v3 00/22] Associate ddc adapters with connectors

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 06:01:14PM +0200, Andrzej Pietrasiewicz wrote:
> It is difficult for a user to know which of the i2c adapters is for which
> drm connector. This series addresses this problem.
> 
> The idea is to have a symbolic link in connector's sysfs directory, e.g.:
> 
> ls -l /sys/class/drm/card0-HDMI-A-1/ddc
> lrwxrwxrwx 1 root root 0 Jun 24 10:42 /sys/class/drm/card0-HDMI-A-1/ddc \
>   -> ../../../../soc/1388.i2c/i2c-2
> 
> The user then knows that their card0-HDMI-A-1 uses i2c-2 and can e.g. run
> ddcutil:
> 
> ddcutil -b 2 getvcp 0x10
> VCP code 0x10 (Brightness): current value = 90, max value = 100
> 
> The first patch in the series adds struct i2c_adapter pointer to struct
> drm_connector. If the field is used by a particular driver, then an
> appropriate symbolic link is created by the generic code, which is also added
> by this patch.
> 
> The second patch is an example of how to convert a driver to this new scheme.
> 
> v1..v2:
> 
> - used fixed name "ddc" for the symbolic link in order to make it easy for
> userspace to find the i2c adapter
> 
> v2..v3:
> 
> - converted as many drivers as possible.
> 
> PATCHES 3/22-22/22 SHOULD BE CONSIDERED RFC!

There's a lot more drivers than this I think (i915 is absent as an
example, but there should be tons more). Why are those not possible?
-Daniel

> 
> Andrzej Pietrasiewicz (22):
>   drm: Include ddc adapter pointer in struct drm_connector
>   drm/exynos: Provide ddc symlink in connector's sysfs
>   drm: rockchip: Provide ddc symlink in rk3066_hdmi sysfs directory
>   drm: rockchip: Provide ddc symlink in inno_hdmi sysfs directory
>   drm/msm/hdmi: Provide ddc symlink in hdmi connector sysfs directory
>   drm/sun4i: hdmi: Provide ddc symlink in sun4i hdmi connector sysfs
> directory
>   drm/mediatek: Provide ddc symlink in hdmi connector sysfs directory
>   drm/tegra: Provide ddc symlink in output connector sysfs directory
>   drm/imx: imx-ldb: Provide ddc symlink in connector's sysfs
>   drm/imx: imx-tve: Provide ddc symlink in connector's sysfs
>   drm/vc4: Provide ddc symlink in connector sysfs directory
>   drm: zte: Provide ddc symlink in hdmi connector sysfs directory
>   drm: zte: Provide ddc symlink in vga connector sysfs directory
>   drm/tilcdc: Provide ddc symlink in connector sysfs directory
>   drm: sti: Provide ddc symlink in hdmi connector sysfs directory
>   drm/mgag200: Provide ddc symlink in connector sysfs directory
>   drm/ast: Provide ddc symlink in connector sysfs directory
>   drm/bridge: dumb-vga-dac: Provide ddc symlink in connector sysfs
> directory
>   drm/bridge: dw-hdmi: Provide ddc symlink in connector sysfs directory
>   drm/bridge: ti-tfp410: Provide ddc symlink in connector sysfs
> directory
>   drm/amdgpu: Provide ddc symlink in connector sysfs directory
>   drm/radeon: Provide ddc symlink in connector sysfs directory
> 
>  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 70 +++-
>  drivers/gpu/drm/ast/ast_mode.c|  1 +
>  drivers/gpu/drm/bridge/dumb-vga-dac.c | 19 ++---
>  drivers/gpu/drm/bridge/synopsys/dw-hdmi.c | 40 -
>  drivers/gpu/drm/bridge/ti-tfp410.c| 19 ++---
>  drivers/gpu/drm/drm_sysfs.c   |  7 ++
>  drivers/gpu/drm/exynos/exynos_hdmi.c  | 11 ++-
>  drivers/gpu/drm/imx/imx-ldb.c | 13 ++-
>  drivers/gpu/drm/imx/imx-tve.c |  8 +-
>  drivers/gpu/drm/mediatek/mtk_hdmi.c   |  9 +-
>  drivers/gpu/drm/mgag200/mgag200_mode.c|  1 +
>  drivers/gpu/drm/msm/hdmi/hdmi_connector.c |  1 +
>  drivers/gpu/drm/radeon/radeon_connectors.c| 82 ++-
>  drivers/gpu/drm/rockchip/inno_hdmi.c  | 17 ++--
>  drivers/gpu/drm/rockchip/rk3066_hdmi.c| 17 ++--
>  drivers/gpu/drm/sti/sti_hdmi.c|  1 +
>  drivers/gpu/drm/sun4i/sun4i_hdmi.h|  1 -
>  drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c| 14 ++--
>  drivers/gpu/drm/tegra/drm.h   |  1 -
>  drivers/gpu/drm/tegra/output.c| 12 +--
>  drivers/gpu/drm/tegra/sor.c   |  6 +-
>  drivers/gpu/drm/tilcdc/tilcdc_tfp410.c|  1 +
>  drivers/gpu/drm/vc4/vc4_hdmi.c| 16 ++--
>  drivers/gpu/drm/zte/zx_hdmi.c | 25 ++
>  drivers/gpu/drm/zte/zx_vga.c  | 25 ++
>  include/drm/drm_connector.h   | 11 +++
>  26 files changed, 252 insertions(+), 176 deletions(-)
> 
> -- 
> 2.17.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 5:21 PM Koenig, Christian
 wrote:
>
> On 28.06.19 at 16:38, Daniel Vetter wrote:
> [SNIP]
> > - when you submit command buffers, you _dont_ attach fences to all
> > involved buffers
> >  That's not going to work because then the memory management thinks
>  that the buffer is immediately movable, which it isn't,
> >>> I guess we need to fix that then. I pretty much assumed that
> >>> ->notify_move could add whatever fences you might want to add. Which
> >>> would very neatly allow us to solve this problem here, instead of
> >>> coming up with fake fences and fun stuff like that.
> >> Adding the fence later on is not a solution because we need something
> >> which beforehand can check if a buffer is movable or not.
> >>
> >> In the case of a move_notify the decision to move it is already done and
> >> you can't say oh sorry I have to evict my process and reprogram the
> >> hardware or whatever.
> >>
> >> Especially when you do this in an OOM situation.
> > Why? I mean when the fence for a CS is there already, it might also
> > still hang out in the scheduler, or be blocked on a fence from another
> > driver, or anything like that. I don't see a conceptual difference.
> > Plus with dynamic dma-buf the entire point is that an attached fences
> > does _not_ mean the buffer is permanently pinned, but can be moved if
> > you sync correctly. Might need a bit of tuning or a flag to indicate
> > that some buffers should always be considered busy, and that you
> > shouldn't start evicting those. But that's kinda a detail.
> >
> >  From a very high level there's really no difference between
> > ->notify_move and the eviction_fence. Both give you a callback when
> > someone else needs to move the buffer, that's all. The only difference
> > is that the eviction_fence thing jumbles the callback and the fence
> > into one, by preattaching a fence just in case. But again from a
> > conceptual pov it doesn't matter whether the fence is always hanging
> > around, or whether you just attach it when ->notify_move is called.
>
> Sure there is a difference. See, when you attach the fence beforehand,
> the memory management can know that the buffer is busy.
>
> Just imagine the following: We are in an OOM situation and need to swap
> things out to disk!
>
> When the fence is attached beforehand the handling can be as following:
> 1. MM picks a BO from the LRU and starts to evict it.
> 2. The eviction fence is enabled and we stop the process using this BO.
> 3. As soon as the process is stopped the fence is set into the signaled
> state.
> 4. MM needs to evict more BOs and since the fence for this process is
> now in the signaled state it can intentionally pick the ones up which
> are now idle.
>
> When we attach the fence only on eviction that can't happen and the MM
> would just pick the next random BO and potentially stop another process.
>
> So I think we can summarize that the memory management definitely needs
> to know beforehand how costly it is to evict a BO.
>
> And of course implement this with flags or use counters or whatever, but
> we already have the fence infrastructure and I don't see a reason not to
> use it.

Ok, for the sake of the argument let's buy this.

Why do we need a ->notify_move callback then? We have it already, with
these special fences.

Other side: If all you want to know is whether you can unmap a buffer
immediately, for some short enough value of immediately (I guess a
bunch of pagetable writes should be ok), then why not add that? The "I
don't want to touch all buffers for every CS, but just have a pinned
working set" command submission model is quite different after all,
having dedicated infrastructure that fits well sounds like a good idea
to me.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

[PATCH v3 07/22] drm/mediatek: Provide ddc symlink in hdmi connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/mediatek/mtk_hdmi.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
index 5d6a9f094df5..6c5321dcc4b8 100644
--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
+++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
@@ -146,7 +146,6 @@ struct mtk_hdmi {
struct device *dev;
struct phy *phy;
struct device *cec_dev;
-   struct i2c_adapter *ddc_adpt;
struct clk *clk[MTK_HDMI_CLK_COUNT];
struct drm_display_mode mode;
bool dvi_mode;
@@ -1213,10 +1212,10 @@ static int mtk_hdmi_conn_get_modes(struct drm_connector *conn)
struct edid *edid;
int ret;
 
-   if (!hdmi->ddc_adpt)
+   if (!conn->ddc)
return -ENODEV;
 
-   edid = drm_get_edid(conn, hdmi->ddc_adpt);
+   edid = drm_get_edid(conn, conn->ddc);
if (!edid)
return -ENODEV;
 
@@ -1509,9 +1508,9 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
}
of_node_put(remote);
 
-   hdmi->ddc_adpt = of_find_i2c_adapter_by_node(i2c_np);
+   hdmi->conn.ddc = of_find_i2c_adapter_by_node(i2c_np);
of_node_put(i2c_np);
-   if (!hdmi->ddc_adpt) {
+   if (!hdmi->conn.ddc) {
dev_err(dev, "Failed to get ddc i2c adapter by node\n");
return -EINVAL;
}
-- 
2.17.1


[PATCH v3 22/22] drm/radeon: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/radeon/radeon_connectors.c | 82 +-
 1 file changed, 63 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index c60d1a44d22a..a876e51d275a 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -1946,11 +1946,15 @@ radeon_add_atom_connector(struct drm_device *dev,
 		radeon_dig_connector->igp_lane_info = igp_lane_info;
 		radeon_connector->con_priv = radeon_dig_connector;
 		if (i2c_bus->valid) {
-			radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
-			if (radeon_connector->ddc_bus)
+			struct radeon_connector *rcn = radeon_connector;
+
+			rcn->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
+			if (rcn->ddc_bus) {
 				has_aux = true;
-			else
+				connector->ddc = &rcn->ddc_bus->adapter;
+			} else {
 				DRM_ERROR("DP: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+			}
 		}
 		switch (connector_type) {
 		case DRM_MODE_CONNECTOR_VGA:
@@ -2045,9 +2049,13 @@ radeon_add_atom_connector(struct drm_device *dev,
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
 		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
-			radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
-			if (!radeon_connector->ddc_bus)
+			struct radeon_connector *rcn = radeon_connector;
+
+			rcn->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
+			if (!rcn->ddc_bus)
 				DRM_ERROR("VGA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+			else
+				connector->ddc = &rcn->ddc_bus->adapter;
 		}
 		radeon_connector->dac_load_detect = true;
 		drm_object_attach_property(&radeon_connector->base.base,
@@ -2070,9 +2078,13 @@ radeon_add_atom_connector(struct drm_device *dev,
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
 		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
-			radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
-			if (!radeon_connector->ddc_bus)
+			struct radeon_connector *rcn = radeon_connector;
+
+			rcn->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
+			if (!rcn->ddc_bus)
 				DRM_ERROR("DVIA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+			else
+				connector->ddc = &rcn->ddc_bus->adapter;
 		}
 		radeon_connector->dac_load_detect = true;
 		drm_object_attach_property(&radeon_connector->base.base,
@@ -2101,9 +2113,13 @@ radeon_add_atom_connector(struct drm_device *dev,
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dvi_connector_funcs, connector_type);
 		drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
 		if (i2c_bus->valid) {
-			radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
-			if (!radeon_connector->ddc_bus)
+			struct radeon_connector *rcn = radeon_connector;
+
+			rcn->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
+			if (!rcn->ddc_bus)
 				DRM_ERROR("DVI: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+			else
+				connector->ddc = &rcn->ddc_bus->adapter;
 		}
 		subpixel_order = SubPixelHorizontalRGB;
 		drm_object_attach_property(&radeon_connector->base.base,
@@ -2158,9 +2174,13 @@ radeon_add_atom_connector(struct drm_device *dev,
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dvi_connector_funcs, connector_type);
 		drm_connector_helper_add(&radeon_connector->base,
[PATCH v3 05/22] drm/msm/hdmi: Provide ddc symlink in hdmi connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/msm/hdmi/hdmi_connector.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c
index 07b4cb877d82..4979e3362687 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c
@@ -461,6 +461,7 @@ struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi)
connector->doublescan_allowed = 0;
 
drm_connector_attach_encoder(connector, hdmi->encoder);
+   connector->ddc = hdmi->i2c;
 
return connector;
 }
-- 
2.17.1


[PATCH v3 12/22] drm: zte: Provide ddc symlink in hdmi connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/zte/zx_hdmi.c | 25 +
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/zte/zx_hdmi.c b/drivers/gpu/drm/zte/zx_hdmi.c
index bfe918b27c5c..862a855ea14a 100644
--- a/drivers/gpu/drm/zte/zx_hdmi.c
+++ b/drivers/gpu/drm/zte/zx_hdmi.c
@@ -29,15 +29,11 @@
 #define ZX_HDMI_INFOFRAME_SIZE 31
 #define DDC_SEGMENT_ADDR   0x30
 
-struct zx_hdmi_i2c {
-   struct i2c_adapter adap;
-   struct mutex lock;
-};
-
 struct zx_hdmi {
struct drm_connector connector;
struct drm_encoder encoder;
-   struct zx_hdmi_i2c *ddc;
+   /* protects ddc access */
+   struct mutex ddc_lock;
struct device *dev;
struct drm_device *drm;
void __iomem *mmio;
@@ -264,7 +260,7 @@ static int zx_hdmi_connector_get_modes(struct drm_connector *connector)
 	struct edid *edid;
 	int ret;
 
-	edid = drm_get_edid(connector, &hdmi->ddc->adap);
+   edid = drm_get_edid(connector, connector->ddc);
if (!edid)
return 0;
 
@@ -562,10 +558,9 @@ static int zx_hdmi_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
 			    int num)
 {
 	struct zx_hdmi *hdmi = i2c_get_adapdata(adap);
-	struct zx_hdmi_i2c *ddc = hdmi->ddc;
 	int i, ret = 0;
 
-	mutex_lock(&ddc->lock);
+	mutex_lock(&hdmi->ddc_lock);
 
/* Enable DDC master access */
hdmi_writeb_mask(hdmi, TPI_DDC_MASTER_EN, HW_DDC_MASTER, HW_DDC_MASTER);
@@ -590,7 +585,7 @@ static int zx_hdmi_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
 	/* Disable DDC master access */
 	hdmi_writeb_mask(hdmi, TPI_DDC_MASTER_EN, HW_DDC_MASTER, 0);
 
-	mutex_unlock(&ddc->lock);
+	mutex_unlock(&hdmi->ddc_lock);
 
return ret;
 }
@@ -608,17 +603,15 @@ static const struct i2c_algorithm zx_hdmi_algorithm = {
 static int zx_hdmi_ddc_register(struct zx_hdmi *hdmi)
 {
struct i2c_adapter *adap;
-   struct zx_hdmi_i2c *ddc;
int ret;
 
-   ddc = devm_kzalloc(hdmi->dev, sizeof(*ddc), GFP_KERNEL);
-   if (!ddc)
+   adap = devm_kzalloc(hdmi->dev, sizeof(*adap), GFP_KERNEL);
+   if (!adap)
return -ENOMEM;
 
-	hdmi->ddc = ddc;
-	mutex_init(&ddc->lock);
+	hdmi->connector.ddc = adap;
+	mutex_init(&hdmi->ddc_lock);
 
-	adap = &ddc->adap;
adap->owner = THIS_MODULE;
adap->class = I2C_CLASS_DDC;
adap->dev.parent = hdmi->dev;
-- 
2.17.1


[PATCH v3 18/22] drm/bridge: dumb-vga-dac: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/bridge/dumb-vga-dac.c | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/bridge/dumb-vga-dac.c b/drivers/gpu/drm/bridge/dumb-vga-dac.c
index d32885b906ae..b4cc3238400a 100644
--- a/drivers/gpu/drm/bridge/dumb-vga-dac.c
+++ b/drivers/gpu/drm/bridge/dumb-vga-dac.c
@@ -20,7 +20,6 @@ struct dumb_vga {
struct drm_bridge   bridge;
struct drm_connectorconnector;
 
-   struct i2c_adapter  *ddc;
struct regulator*vdd;
 };
 
@@ -42,10 +41,10 @@ static int dumb_vga_get_modes(struct drm_connector *connector)
struct edid *edid;
int ret;
 
-   if (IS_ERR(vga->ddc))
+   if (IS_ERR(vga->connector.ddc))
goto fallback;
 
-   edid = drm_get_edid(connector, vga->ddc);
+   edid = drm_get_edid(connector, vga->connector.ddc);
if (!edid) {
		DRM_INFO("EDID readout failed, falling back to standard modes\n");
goto fallback;
@@ -84,7 +83,7 @@ dumb_vga_connector_detect(struct drm_connector *connector, bool force)
 * wire the DDC pins, or the I2C bus might not be working at
 * all.
 */
-   if (!IS_ERR(vga->ddc) && drm_probe_ddc(vga->ddc))
+   if (!IS_ERR(vga->connector.ddc) && drm_probe_ddc(vga->connector.ddc))
return connector_status_connected;
 
return connector_status_unknown;
@@ -190,14 +189,14 @@ static int dumb_vga_probe(struct platform_device *pdev)
		dev_dbg(&pdev->dev, "No vdd regulator found: %d\n", ret);
}
 
-	vga->ddc = dumb_vga_retrieve_ddc(&pdev->dev);
-	if (IS_ERR(vga->ddc)) {
-		if (PTR_ERR(vga->ddc) == -ENODEV) {
+	vga->connector.ddc = dumb_vga_retrieve_ddc(&pdev->dev);
+   if (IS_ERR(vga->connector.ddc)) {
+   if (PTR_ERR(vga->connector.ddc) == -ENODEV) {
			dev_dbg(&pdev->dev, "No i2c bus specified. Disabling EDID readout\n");
} else {
			dev_err(&pdev->dev, "Couldn't retrieve i2c bus\n");
-   return PTR_ERR(vga->ddc);
+   return PTR_ERR(vga->connector.ddc);
}
}
 
@@ -216,8 +215,8 @@ static int dumb_vga_remove(struct platform_device *pdev)
 
	drm_bridge_remove(&vga->bridge);
 
-   if (!IS_ERR(vga->ddc))
-   i2c_put_adapter(vga->ddc);
+   if (!IS_ERR(vga->connector.ddc))
+   i2c_put_adapter(vga->connector.ddc);
 
return 0;
 }
-- 
2.17.1


[PATCH v3 00/22] Associate ddc adapters with connectors

2019-06-28 Thread Andrzej Pietrasiewicz
It is difficult for a user to know which of the i2c adapters is for which
drm connector. This series addresses this problem.

The idea is to have a symbolic link in connector's sysfs directory, e.g.:

ls -l /sys/class/drm/card0-HDMI-A-1/ddc
lrwxrwxrwx 1 root root 0 Jun 24 10:42 /sys/class/drm/card0-HDMI-A-1/ddc \
-> ../../../../soc/1388.i2c/i2c-2

The user then knows that their card0-HDMI-A-1 uses i2c-2 and can e.g. run
ddcutil:

ddcutil -b 2 getvcp 0x10
VCP code 0x10 (Brightness): current value = 90, max value = 100

The first patch in the series adds struct i2c_adapter pointer to struct
drm_connector. If the field is used by a particular driver, then an
appropriate symbolic link is created by the generic code, which is also added
by this patch.
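That generic code amounts to a small helper on the drm_sysfs side; the following is a hedged sketch of its shape based on my reading of the series (the helper name and exact placement are assumptions and may differ from the posted patch):

```c
/* Sketch: if a driver filled in connector->ddc, publish a "ddc"
 * symlink from the connector's sysfs directory to the i2c adapter's
 * device node.  Drivers that do not opt in simply get no link. */
static int drm_sysfs_connector_add_ddc_link(struct drm_connector *connector)
{
	if (!connector->ddc)
		return 0;	/* driver did not opt in; no link created */

	return sysfs_create_link(&connector->kdev->kobj,
				 &connector->ddc->dev.kobj, "ddc");
}
```

Because the link has the fixed name "ddc", userspace can resolve it without driver-specific knowledge, which is exactly what the v1..v2 change locked in.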

The second patch is an example of how to convert a driver to this new scheme.

v1..v2:

- used fixed name "ddc" for the symbolic link in order to make it easy for
userspace to find the i2c adapter

v2..v3:

- converted as many drivers as possible.

PATCHES 3/22-22/22 SHOULD BE CONSIDERED RFC!

Andrzej Pietrasiewicz (22):
  drm: Include ddc adapter pointer in struct drm_connector
  drm/exynos: Provide ddc symlink in connector's sysfs
  drm: rockchip: Provide ddc symlink in rk3066_hdmi sysfs directory
  drm: rockchip: Provide ddc symlink in inno_hdmi sysfs directory
  drm/msm/hdmi: Provide ddc symlink in hdmi connector sysfs directory
  drm/sun4i: hdmi: Provide ddc symlink in sun4i hdmi connector sysfs
directory
  drm/mediatek: Provide ddc symlink in hdmi connector sysfs directory
  drm/tegra: Provide ddc symlink in output connector sysfs directory
  drm/imx: imx-ldb: Provide ddc symlink in connector's sysfs
  drm/imx: imx-tve: Provide ddc symlink in connector's sysfs
  drm/vc4: Provide ddc symlink in connector sysfs directory
  drm: zte: Provide ddc symlink in hdmi connector sysfs directory
  drm: zte: Provide ddc symlink in vga connector sysfs directory
  drm/tilcdc: Provide ddc symlink in connector sysfs directory
  drm: sti: Provide ddc symlink in hdmi connector sysfs directory
  drm/mgag200: Provide ddc symlink in connector sysfs directory
  drm/ast: Provide ddc symlink in connector sysfs directory
  drm/bridge: dumb-vga-dac: Provide ddc symlink in connector sysfs
directory
  drm/bridge: dw-hdmi: Provide ddc symlink in connector sysfs directory
  drm/bridge: ti-tfp410: Provide ddc symlink in connector sysfs
directory
  drm/amdgpu: Provide ddc symlink in connector sysfs directory
  drm/radeon: Provide ddc symlink in connector sysfs directory

 .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 70 +++-
 drivers/gpu/drm/ast/ast_mode.c|  1 +
 drivers/gpu/drm/bridge/dumb-vga-dac.c | 19 ++---
 drivers/gpu/drm/bridge/synopsys/dw-hdmi.c | 40 -
 drivers/gpu/drm/bridge/ti-tfp410.c| 19 ++---
 drivers/gpu/drm/drm_sysfs.c   |  7 ++
 drivers/gpu/drm/exynos/exynos_hdmi.c  | 11 ++-
 drivers/gpu/drm/imx/imx-ldb.c | 13 ++-
 drivers/gpu/drm/imx/imx-tve.c |  8 +-
 drivers/gpu/drm/mediatek/mtk_hdmi.c   |  9 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c|  1 +
 drivers/gpu/drm/msm/hdmi/hdmi_connector.c |  1 +
 drivers/gpu/drm/radeon/radeon_connectors.c| 82 ++-
 drivers/gpu/drm/rockchip/inno_hdmi.c  | 17 ++--
 drivers/gpu/drm/rockchip/rk3066_hdmi.c| 17 ++--
 drivers/gpu/drm/sti/sti_hdmi.c|  1 +
 drivers/gpu/drm/sun4i/sun4i_hdmi.h|  1 -
 drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c| 14 ++--
 drivers/gpu/drm/tegra/drm.h   |  1 -
 drivers/gpu/drm/tegra/output.c| 12 +--
 drivers/gpu/drm/tegra/sor.c   |  6 +-
 drivers/gpu/drm/tilcdc/tilcdc_tfp410.c|  1 +
 drivers/gpu/drm/vc4/vc4_hdmi.c| 16 ++--
 drivers/gpu/drm/zte/zx_hdmi.c | 25 ++
 drivers/gpu/drm/zte/zx_vga.c  | 25 ++
 include/drm/drm_connector.h   | 11 +++
 26 files changed, 252 insertions(+), 176 deletions(-)

-- 
2.17.1


[PATCH v3 06/22] drm/sun4i: hdmi: Provide ddc symlink in sun4i hdmi connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/sun4i/sun4i_hdmi.h |  1 -
 drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c | 14 +++---
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi.h b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
index 7ad3f06c127e..1649273b1493 100644
--- a/drivers/gpu/drm/sun4i/sun4i_hdmi.h
+++ b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
@@ -265,7 +265,6 @@ struct sun4i_hdmi {
struct clk  *tmds_clk;
 
struct i2c_adapter  *i2c;
-   struct i2c_adapter  *ddc_i2c;
 
/* Regmap fields for I2C adapter */
struct regmap_field *field_ddc_en;
diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c 
b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
index 9c3f99339b82..250bec00dc35 100644
--- a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
+++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
@@ -213,7 +213,7 @@ static int sun4i_hdmi_get_modes(struct drm_connector 
*connector)
struct edid *edid;
int ret;
 
-   edid = drm_get_edid(connector, hdmi->ddc_i2c ?: hdmi->i2c);
+   edid = drm_get_edid(connector, connector->ddc ?: hdmi->i2c);
if (!edid)
return 0;
 
@@ -598,11 +598,11 @@ static int sun4i_hdmi_bind(struct device *dev, struct 
device *master,
goto err_disable_mod_clk;
}
 
-   hdmi->ddc_i2c = sun4i_hdmi_get_ddc(dev);
-   if (IS_ERR(hdmi->ddc_i2c)) {
-   ret = PTR_ERR(hdmi->ddc_i2c);
+   hdmi->connector.ddc = sun4i_hdmi_get_ddc(dev);
+   if (IS_ERR(hdmi->connector.ddc)) {
+   ret = PTR_ERR(hdmi->connector.ddc);
if (ret == -ENODEV)
-   hdmi->ddc_i2c = NULL;
+   hdmi->connector.ddc = NULL;
else
goto err_del_i2c_adapter;
}
@@ -663,7 +663,7 @@ static int sun4i_hdmi_bind(struct device *dev, struct 
device *master,
cec_delete_adapter(hdmi->cec_adap);
drm_encoder_cleanup(&hdmi->encoder);
 err_put_ddc_i2c:
-   i2c_put_adapter(hdmi->ddc_i2c);
+   i2c_put_adapter(hdmi->connector.ddc);
 err_del_i2c_adapter:
i2c_del_adapter(hdmi->i2c);
 err_disable_mod_clk:
@@ -684,7 +684,7 @@ static void sun4i_hdmi_unbind(struct device *dev, struct 
device *master,
drm_connector_cleanup(&hdmi->connector);
drm_encoder_cleanup(&hdmi->encoder);
i2c_del_adapter(hdmi->i2c);
-   i2c_put_adapter(hdmi->ddc_i2c);
+   i2c_put_adapter(hdmi->connector.ddc);
clk_disable_unprepare(hdmi->mod_clk);
clk_disable_unprepare(hdmi->bus_clk);
 }
-- 
2.17.1


[PATCH v3 11/22] drm/vc4: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/vc4/vc4_hdmi.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
index 87ad0879edf3..b46df3aa1798 100644
--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
+++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -75,7 +75,6 @@ struct vc4_hdmi {
 
struct vc4_hdmi_audio audio;
 
-   struct i2c_adapter *ddc;
void __iomem *hdmicore_regs;
void __iomem *hd_regs;
int hpd_gpio;
@@ -206,7 +205,7 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, 
bool force)
return connector_status_disconnected;
}
 
-   if (drm_probe_ddc(vc4->hdmi->ddc))
+   if (drm_probe_ddc(connector->ddc))
return connector_status_connected;
 
if (HDMI_READ(VC4_HDMI_HOTPLUG) & VC4_HDMI_HOTPLUG_CONNECTED)
@@ -232,7 +231,7 @@ static int vc4_hdmi_connector_get_modes(struct 
drm_connector *connector)
int ret = 0;
struct edid *edid;
 
-   edid = drm_get_edid(connector, vc4->hdmi->ddc);
+   edid = drm_get_edid(connector, connector->ddc);
cec_s_phys_addr_from_edid(vc4->hdmi->cec_adap, edid);
if (!edid)
return -ENODEV;
@@ -1287,6 +1286,7 @@ static int vc4_hdmi_bind(struct device *dev, struct 
device *master, void *data)
struct vc4_hdmi *hdmi;
struct vc4_hdmi_encoder *vc4_hdmi_encoder;
struct device_node *ddc_node;
+   struct i2c_adapter *ddc;
u32 value;
int ret;
 
@@ -1334,9 +1334,9 @@ static int vc4_hdmi_bind(struct device *dev, struct 
device *master, void *data)
return -ENODEV;
}
 
-   hdmi->ddc = of_find_i2c_adapter_by_node(ddc_node);
+   ddc = of_find_i2c_adapter_by_node(ddc_node);
of_node_put(ddc_node);
-   if (!hdmi->ddc) {
+   if (!ddc) {
DRM_DEBUG("Failed to get ddc i2c adapter by node\n");
return -EPROBE_DEFER;
}
@@ -1396,6 +1396,7 @@ static int vc4_hdmi_bind(struct device *dev, struct 
device *master, void *data)
ret = PTR_ERR(hdmi->connector);
goto err_destroy_encoder;
}
+   hdmi->connector->ddc = ddc;
 #ifdef CONFIG_DRM_VC4_HDMI_CEC
hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops,
  vc4, "vc4",
@@ -1448,7 +1449,7 @@ static int vc4_hdmi_bind(struct device *dev, struct 
device *master, void *data)
clk_disable_unprepare(hdmi->hsm_clock);
pm_runtime_disable(dev);
 err_put_i2c:
-   put_device(&hdmi->ddc->dev);
+   put_device(&ddc->dev);
 
return ret;
 }
@@ -1459,6 +1460,7 @@ static void vc4_hdmi_unbind(struct device *dev, struct 
device *master,
struct drm_device *drm = dev_get_drvdata(master);
struct vc4_dev *vc4 = drm->dev_private;
struct vc4_hdmi *hdmi = vc4->hdmi;
+   struct i2c_adapter *ddc = hdmi->connector->ddc;
 
cec_unregister_adapter(hdmi->cec_adap);
vc4_hdmi_connector_destroy(hdmi->connector);
@@ -1467,7 +1469,7 @@ static void vc4_hdmi_unbind(struct device *dev, struct 
device *master,
clk_disable_unprepare(hdmi->hsm_clock);
pm_runtime_disable(dev);
 
-   put_device(&hdmi->ddc->dev);
+   put_device(&ddc->dev);
 
vc4->hdmi = NULL;
 }
-- 
2.17.1
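[Editorial note] The vc4 hunk above resolves the DDC adapter early in bind, defers probing when the i2c controller has not registered yet (the corrected `if (!ddc)` branch), and attaches the adapter to the connector only once the connector exists. A hedged sketch of that lookup-then-defer pattern, with mock types and an illustrative error constant in place of the real kernel API:

```c
#include <stddef.h>

#define EPROBE_DEFER 517	/* illustrative; the kernel's value */

struct i2c_adapter {
	int ready;
};

struct drm_connector {
	struct i2c_adapter *ddc;
};

/* Stand-in for of_find_i2c_adapter_by_node(): returns NULL until the
 * i2c controller driver has probed. */
static struct i2c_adapter *find_adapter(struct i2c_adapter *registered)
{
	return (registered && registered->ready) ? registered : NULL;
}

static int hdmi_bind(struct drm_connector *connector,
		     struct i2c_adapter *registered)
{
	struct i2c_adapter *ddc = find_adapter(registered);

	if (!ddc)		/* bus not available yet: ask to be re-probed */
		return -EPROBE_DEFER;

	connector->ddc = ddc;	/* now the sysfs "ddc" symlink can be created */
	return 0;
}
```

Deferring instead of failing hard lets probe ordering between the HDMI driver and its DDC i2c controller resolve itself.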


[PATCH v3 09/22] drm/imx: imx-ldb: Provide ddc symlink in connector's sysfs

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/imx/imx-ldb.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
index 383733302280..44fdb264339e 100644
--- a/drivers/gpu/drm/imx/imx-ldb.c
+++ b/drivers/gpu/drm/imx/imx-ldb.c
@@ -55,7 +55,6 @@ struct imx_ldb_channel {
struct drm_bridge *bridge;
 
struct device_node *child;
-   struct i2c_adapter *ddc;
int chno;
void *edid;
int edid_len;
@@ -131,8 +130,8 @@ static int imx_ldb_connector_get_modes(struct drm_connector 
*connector)
return num_modes;
}
 
-   if (!imx_ldb_ch->edid && imx_ldb_ch->ddc)
-   imx_ldb_ch->edid = drm_get_edid(connector, imx_ldb_ch->ddc);
+   if (!imx_ldb_ch->edid && connector->ddc)
+   imx_ldb_ch->edid = drm_get_edid(connector, connector->ddc);
 
if (imx_ldb_ch->edid) {
drm_connector_update_edid_property(connector,
@@ -550,15 +549,15 @@ static int imx_ldb_panel_ddc(struct device *dev,
 
ddc_node = of_parse_phandle(child, "ddc-i2c-bus", 0);
if (ddc_node) {
-   channel->ddc = of_find_i2c_adapter_by_node(ddc_node);
+   channel->connector.ddc = of_find_i2c_adapter_by_node(ddc_node);
of_node_put(ddc_node);
-   if (!channel->ddc) {
+   if (!channel->connector.ddc) {
dev_warn(dev, "failed to get ddc i2c adapter\n");
return -EPROBE_DEFER;
}
}
 
-   if (!channel->ddc) {
+   if (!channel->connector.ddc) {
/* if no DDC available, fallback to hardcoded EDID */
dev_dbg(dev, "no ddc available\n");
 
@@ -725,7 +724,7 @@ static void imx_ldb_unbind(struct device *dev, struct 
device *master,
drm_panel_detach(channel->panel);
 
kfree(channel->edid);
-   i2c_put_adapter(channel->ddc);
+   i2c_put_adapter(channel->connector.ddc);
}
 }
 
-- 
2.17.1


[PATCH v3 20/22] drm/bridge: ti-tfp410: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/bridge/ti-tfp410.c | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/bridge/ti-tfp410.c 
b/drivers/gpu/drm/bridge/ti-tfp410.c
index dbf35c7bc85e..e55358f0a5ba 100644
--- a/drivers/gpu/drm/bridge/ti-tfp410.c
+++ b/drivers/gpu/drm/bridge/ti-tfp410.c
@@ -26,7 +26,6 @@ struct tfp410 {
unsigned intconnector_type;
 
u32 bus_format;
-   struct i2c_adapter  *ddc;
struct gpio_desc*hpd;
int hpd_irq;
struct delayed_work hpd_work;
@@ -55,10 +54,10 @@ static int tfp410_get_modes(struct drm_connector *connector)
struct edid *edid;
int ret;
 
-   if (!dvi->ddc)
+   if (!dvi->connector.ddc)
goto fallback;
 
-   edid = drm_get_edid(connector, dvi->ddc);
+   edid = drm_get_edid(connector, dvi->connector.ddc);
if (!edid) {
DRM_INFO("EDID read failed. Fallback to standard modes\n");
goto fallback;
@@ -98,8 +97,8 @@ tfp410_connector_detect(struct drm_connector *connector, bool 
force)
return connector_status_disconnected;
}
 
-   if (dvi->ddc) {
-   if (drm_probe_ddc(dvi->ddc))
+   if (dvi->connector.ddc) {
+   if (drm_probe_ddc(dvi->connector.ddc))
return connector_status_connected;
else
return connector_status_disconnected;
@@ -297,8 +296,8 @@ static int tfp410_get_connector_properties(struct tfp410 
*dvi)
if (!ddc_phandle)
goto fail;
 
-   dvi->ddc = of_get_i2c_adapter_by_node(ddc_phandle);
-   if (dvi->ddc)
+   dvi->connector.ddc = of_get_i2c_adapter_by_node(ddc_phandle);
+   if (dvi->connector.ddc)
dev_info(dvi->dev, "Connector's ddc i2c bus found\n");
else
ret = -EPROBE_DEFER;
@@ -367,7 +366,7 @@ static int tfp410_init(struct device *dev, bool i2c)
 
return 0;
 fail:
-   i2c_put_adapter(dvi->ddc);
+   i2c_put_adapter(dvi->connector.ddc);
if (dvi->hpd)
gpiod_put(dvi->hpd);
return ret;
@@ -382,8 +381,8 @@ static int tfp410_fini(struct device *dev)
 
drm_bridge_remove(>bridge);
 
-   if (dvi->ddc)
-   i2c_put_adapter(dvi->ddc);
+   if (dvi->connector.ddc)
+   i2c_put_adapter(dvi->connector.ddc);
if (dvi->hpd)
gpiod_put(dvi->hpd);
 
-- 
2.17.1


[PATCH v3 03/22] drm: rockchip: Provide ddc symlink in rk3066_hdmi sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/rockchip/rk3066_hdmi.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/rockchip/rk3066_hdmi.c 
b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
index 85fc5f01f761..1f3e630ecdab 100644
--- a/drivers/gpu/drm/rockchip/rk3066_hdmi.c
+++ b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
@@ -49,7 +49,6 @@ struct rk3066_hdmi {
struct drm_encoder encoder;
 
struct rk3066_hdmi_i2c *i2c;
-   struct i2c_adapter *ddc;
 
unsigned int tmdsclk;
 
@@ -470,10 +469,10 @@ static int rk3066_hdmi_connector_get_modes(struct 
drm_connector *connector)
struct edid *edid;
int ret = 0;
 
-   if (!hdmi->ddc)
+   if (!connector->ddc)
return 0;
 
-   edid = drm_get_edid(connector, hdmi->ddc);
+   edid = drm_get_edid(connector, connector->ddc);
if (edid) {
hdmi->hdmi_data.sink_is_hdmi = drm_detect_hdmi_monitor(edid);
drm_connector_update_edid_property(connector, edid);
@@ -789,10 +788,10 @@ static int rk3066_hdmi_bind(struct device *dev, struct 
device *master,
/* internal hclk = hdmi_hclk / 25 */
hdmi_writeb(hdmi, HDMI_INTERNAL_CLK_DIVIDER, 25);
 
-   hdmi->ddc = rk3066_hdmi_i2c_adapter(hdmi);
-   if (IS_ERR(hdmi->ddc)) {
-   ret = PTR_ERR(hdmi->ddc);
-   hdmi->ddc = NULL;
+   hdmi->connector.ddc = rk3066_hdmi_i2c_adapter(hdmi);
+   if (IS_ERR(hdmi->connector.ddc)) {
+   ret = PTR_ERR(hdmi->connector.ddc);
+   hdmi->connector.ddc = NULL;
goto err_disable_hclk;
}
 
@@ -824,7 +823,7 @@ static int rk3066_hdmi_bind(struct device *dev, struct 
device *master,
hdmi->connector.funcs->destroy(&hdmi->connector);
hdmi->encoder.funcs->destroy(&hdmi->encoder);
 err_disable_i2c:
-   i2c_put_adapter(hdmi->ddc);
+   i2c_put_adapter(hdmi->connector.ddc);
 err_disable_hclk:
clk_disable_unprepare(hdmi->hclk);
 
@@ -839,7 +838,7 @@ static void rk3066_hdmi_unbind(struct device *dev, struct 
device *master,
hdmi->connector.funcs->destroy(&hdmi->connector);
hdmi->encoder.funcs->destroy(&hdmi->encoder);
 
-   i2c_put_adapter(hdmi->ddc);
+   i2c_put_adapter(hdmi->connector.ddc);
clk_disable_unprepare(hdmi->hclk);
 }
 
-- 
2.17.1


[PATCH v3 08/22] drm/tegra: Provide ddc symlink in output connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/tegra/drm.h|  1 -
 drivers/gpu/drm/tegra/output.c | 12 ++--
 drivers/gpu/drm/tegra/sor.c|  6 +++---
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/tegra/drm.h b/drivers/gpu/drm/tegra/drm.h
index 86daa19fcf24..9bf72bcd3ec1 100644
--- a/drivers/gpu/drm/tegra/drm.h
+++ b/drivers/gpu/drm/tegra/drm.h
@@ -120,7 +120,6 @@ struct tegra_output {
struct device *dev;
 
struct drm_panel *panel;
-   struct i2c_adapter *ddc;
const struct edid *edid;
struct cec_notifier *cec;
unsigned int hpd_irq;
diff --git a/drivers/gpu/drm/tegra/output.c b/drivers/gpu/drm/tegra/output.c
index 274cb955e2e1..0b5037a29c63 100644
--- a/drivers/gpu/drm/tegra/output.c
+++ b/drivers/gpu/drm/tegra/output.c
@@ -30,8 +30,8 @@ int tegra_output_connector_get_modes(struct drm_connector 
*connector)
 
if (output->edid)
edid = kmemdup(output->edid, sizeof(*edid), GFP_KERNEL);
-   else if (output->ddc)
-   edid = drm_get_edid(connector, output->ddc);
+   else if (connector->ddc)
+   edid = drm_get_edid(connector, connector->ddc);
 
cec_notifier_set_phys_addr_from_edid(output->cec, edid);
drm_connector_update_edid_property(connector, edid);
@@ -111,8 +111,8 @@ int tegra_output_probe(struct tegra_output *output)
 
ddc = of_parse_phandle(output->of_node, "nvidia,ddc-i2c-bus", 0);
if (ddc) {
-   output->ddc = of_find_i2c_adapter_by_node(ddc);
-   if (!output->ddc) {
+   output->connector.ddc = of_find_i2c_adapter_by_node(ddc);
+   if (!output->connector.ddc) {
err = -EPROBE_DEFER;
of_node_put(ddc);
return err;
@@ -174,8 +174,8 @@ void tegra_output_remove(struct tegra_output *output)
if (output->hpd_gpio)
free_irq(output->hpd_irq, output);
 
-   if (output->ddc)
-   put_device(&output->ddc->dev);
+   if (output->connector.ddc)
+   put_device(&output->connector.ddc->dev);
 }
 
 int tegra_output_init(struct drm_device *drm, struct tegra_output *output)
diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
index 4ffe3794e6d3..77e61f98de07 100644
--- a/drivers/gpu/drm/tegra/sor.c
+++ b/drivers/gpu/drm/tegra/sor.c
@@ -2311,7 +2311,7 @@ static void tegra_sor_hdmi_disable_scrambling(struct 
tegra_sor *sor)
 
 static void tegra_sor_hdmi_scdc_disable(struct tegra_sor *sor)
 {
-   struct i2c_adapter *ddc = sor->output.ddc;
+   struct i2c_adapter *ddc = sor->output.connector.ddc;
 
drm_scdc_set_high_tmds_clock_ratio(ddc, false);
drm_scdc_set_scrambling(ddc, false);
@@ -2339,7 +2339,7 @@ static void tegra_sor_hdmi_enable_scrambling(struct 
tegra_sor *sor)
 
 static void tegra_sor_hdmi_scdc_enable(struct tegra_sor *sor)
 {
-   struct i2c_adapter *ddc = sor->output.ddc;
+   struct i2c_adapter *ddc = sor->output.connector.ddc;
 
drm_scdc_set_high_tmds_clock_ratio(ddc, true);
drm_scdc_set_scrambling(ddc, true);
@@ -2350,7 +2350,7 @@ static void tegra_sor_hdmi_scdc_enable(struct tegra_sor 
*sor)
 static void tegra_sor_hdmi_scdc_work(struct work_struct *work)
 {
struct tegra_sor *sor = container_of(work, struct tegra_sor, scdc.work);
-   struct i2c_adapter *ddc = sor->output.ddc;
+   struct i2c_adapter *ddc = sor->output.connector.ddc;
 
if (!drm_scdc_get_scrambling_status(ddc)) {
DRM_DEBUG_KMS("SCDC not scrambled\n");
-- 
2.17.1


[PATCH v3 10/22] drm/imx: imx-tve: Provide ddc symlink in connector's sysfs

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/imx/imx-tve.c | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
index e725af8a0025..b8bee4e1f169 100644
--- a/drivers/gpu/drm/imx/imx-tve.c
+++ b/drivers/gpu/drm/imx/imx-tve.c
@@ -109,7 +109,6 @@ struct imx_tve {
 
struct regmap *regmap;
struct regulator *dac_reg;
-   struct i2c_adapter *ddc;
struct clk *clk;
struct clk *di_sel_clk;
struct clk_hw clk_hw_di;
@@ -218,14 +217,13 @@ static int tve_setup_vga(struct imx_tve *tve)
 
 static int imx_tve_connector_get_modes(struct drm_connector *connector)
 {
-   struct imx_tve *tve = con_to_tve(connector);
struct edid *edid;
int ret = 0;
 
-   if (!tve->ddc)
+   if (!connector->ddc)
return 0;
 
-   edid = drm_get_edid(connector, tve->ddc);
+   edid = drm_get_edid(connector, connector->ddc);
if (edid) {
drm_connector_update_edid_property(connector, edid);
ret = drm_add_edid_modes(connector, edid);
@@ -551,7 +549,7 @@ static int imx_tve_bind(struct device *dev, struct device 
*master, void *data)
 
ddc_node = of_parse_phandle(np, "ddc-i2c-bus", 0);
if (ddc_node) {
-   tve->ddc = of_find_i2c_adapter_by_node(ddc_node);
+   tve->connector.ddc = of_find_i2c_adapter_by_node(ddc_node);
of_node_put(ddc_node);
}
 
-- 
2.17.1


[PATCH v3 04/22] drm: rockchip: Provide ddc symlink in inno_hdmi sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/rockchip/inno_hdmi.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c 
b/drivers/gpu/drm/rockchip/inno_hdmi.c
index f8ca98d294d0..d64b119c2649 100644
--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
+++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
@@ -59,7 +59,6 @@ struct inno_hdmi {
struct drm_encoder  encoder;
 
struct inno_hdmi_i2c *i2c;
-   struct i2c_adapter *ddc;
 
unsigned int tmds_rate;
 
@@ -552,10 +551,10 @@ static int inno_hdmi_connector_get_modes(struct 
drm_connector *connector)
struct edid *edid;
int ret = 0;
 
-   if (!hdmi->ddc)
+   if (!hdmi->connector.ddc)
return 0;
 
-   edid = drm_get_edid(connector, hdmi->ddc);
+   edid = drm_get_edid(connector, hdmi->connector.ddc);
if (edid) {
hdmi->hdmi_data.sink_is_hdmi = drm_detect_hdmi_monitor(edid);
hdmi->hdmi_data.sink_has_audio = drm_detect_monitor_audio(edid);
@@ -850,10 +849,10 @@ static int inno_hdmi_bind(struct device *dev, struct 
device *master,
 
inno_hdmi_reset(hdmi);
 
-   hdmi->ddc = inno_hdmi_i2c_adapter(hdmi);
-   if (IS_ERR(hdmi->ddc)) {
-   ret = PTR_ERR(hdmi->ddc);
-   hdmi->ddc = NULL;
+   hdmi->connector.ddc = inno_hdmi_i2c_adapter(hdmi);
+   if (IS_ERR(hdmi->connector.ddc)) {
+   ret = PTR_ERR(hdmi->connector.ddc);
+   hdmi->connector.ddc = NULL;
goto err_disable_clk;
}
 
@@ -886,7 +885,7 @@ static int inno_hdmi_bind(struct device *dev, struct device 
*master,
hdmi->connector.funcs->destroy(&hdmi->connector);
hdmi->encoder.funcs->destroy(&hdmi->encoder);
 err_put_adapter:
-   i2c_put_adapter(hdmi->ddc);
+   i2c_put_adapter(hdmi->connector.ddc);
 err_disable_clk:
clk_disable_unprepare(hdmi->pclk);
return ret;
@@ -900,7 +899,7 @@ static void inno_hdmi_unbind(struct device *dev, struct 
device *master,
hdmi->connector.funcs->destroy(&hdmi->connector);
hdmi->encoder.funcs->destroy(&hdmi->encoder);
 
-   i2c_put_adapter(hdmi->ddc);
+   i2c_put_adapter(hdmi->connector.ddc);
clk_disable_unprepare(hdmi->pclk);
 }
 
-- 
2.17.1


[PATCH v3 14/22] drm/tilcdc: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/tilcdc/tilcdc_tfp410.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c 
b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
index 62d014c20988..c373edb95666 100644
--- a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
+++ b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
@@ -219,6 +219,7 @@ static struct drm_connector *tfp410_connector_create(struct 
drm_device *dev,
tfp410_connector->mod = mod;
 
connector = &tfp410_connector->base;
+   connector->ddc = mod->i2c;
 
drm_connector_init(dev, connector, &tfp410_connector_funcs,
DRM_MODE_CONNECTOR_DVID);
-- 
2.17.1


[PATCH v3 16/22] drm/mgag200: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/mgag200/mgag200_mode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c 
b/drivers/gpu/drm/mgag200/mgag200_mode.c
index a25054015e8c..a22dbecd4d35 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -1712,6 +1712,7 @@ static struct drm_connector *mga_vga_init(struct 
drm_device *dev)
drm_connector_register(connector);
 
mga_connector->i2c = mgag200_i2c_create(dev);
connector->ddc = &mga_connector->i2c->adapter;
if (!mga_connector->i2c)
DRM_ERROR("failed to add ddc bus\n");
 
-- 
2.17.1


[PATCH v3 21/22] drm/amdgpu: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 70 ++-
 1 file changed, 51 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
index 73b2ede773d3..5f8a7e3818b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -1573,11 +1573,15 @@ amdgpu_connector_add(struct amdgpu_device *adev,
goto failed;
amdgpu_connector->con_priv = amdgpu_dig_connector;
if (i2c_bus->valid) {
-   amdgpu_connector->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
-   if (amdgpu_connector->ddc_bus)
+   struct amdgpu_connector *acn = amdgpu_connector;
+
+   acn->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
+   if (acn->ddc_bus) {
has_aux = true;
-   else
+   connector->ddc = &acn->ddc_bus->adapter;
+   } else {
DRM_ERROR("DP: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+   }
}
switch (connector_type) {
case DRM_MODE_CONNECTOR_VGA:
@@ -1662,9 +1666,13 @@ amdgpu_connector_add(struct amdgpu_device *adev,
drm_connector_init(dev, &amdgpu_connector->base, &amdgpu_connector_vga_funcs, connector_type);
drm_connector_helper_add(&amdgpu_connector->base, &amdgpu_connector_vga_helper_funcs);
if (i2c_bus->valid) {
-   amdgpu_connector->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
-   if (!amdgpu_connector->ddc_bus)
+   struct amdgpu_connector *acn = amdgpu_connector;
+
+   acn->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
+   if (!acn->ddc_bus)
DRM_ERROR("VGA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+   else
+   connector->ddc = &acn->ddc_bus->adapter;
}
amdgpu_connector->dac_load_detect = true;
drm_object_attach_property(&amdgpu_connector->base.base,
@@ -1682,9 +1690,13 @@ amdgpu_connector_add(struct amdgpu_device *adev,
drm_connector_init(dev, &amdgpu_connector->base, &amdgpu_connector_vga_funcs, connector_type);
drm_connector_helper_add(&amdgpu_connector->base, &amdgpu_connector_vga_helper_funcs);
if (i2c_bus->valid) {
-   amdgpu_connector->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
-   if (!amdgpu_connector->ddc_bus)
+   struct amdgpu_connector *acn = amdgpu_connector;
+
+   acn->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
+   if (!acn->ddc_bus)
DRM_ERROR("DVIA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+   else
+   connector->ddc = &acn->ddc_bus->adapter;
}
amdgpu_connector->dac_load_detect = true;
drm_object_attach_property(&amdgpu_connector->base.base,
@@ -1707,9 +1719,13 @@ amdgpu_connector_add(struct amdgpu_device *adev,
drm_connector_init(dev, &amdgpu_connector->base, &amdgpu_connector_dvi_funcs, connector_type);
drm_connector_helper_add(&amdgpu_connector->base, &amdgpu_connector_dvi_helper_funcs);
if (i2c_bus->valid) {
-   amdgpu_connector->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
-   if (!amdgpu_connector->ddc_bus)
+   struct amdgpu_connector *acn = amdgpu_connector;
+
+   acn->ddc_bus = amdgpu_i2c_lookup(adev, i2c_bus);
+   if (!acn->ddc_bus)
DRM_ERROR("DVI: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
+   else
+   connector->ddc = &acn->ddc_bus->adapter;
}
subpixel_order = SubPixelHorizontalRGB;
drm_object_attach_property(&amdgpu_connector->base.base,
@@ -1757,9 +1773,13 @@ amdgpu_connector_add(struct amdgpu_device *adev,
drm_connector_init(dev, &amdgpu_connector->base, &amdgpu_connector_dvi_funcs, connector_type);
drm_connector_helper_add(&amdgpu_connector->base, &amdgpu_connector_dvi_helper_funcs);

[PATCH v3 13/22] drm: zte: Provide ddc symlink in vga connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/zte/zx_vga.c | 25 +
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/zte/zx_vga.c b/drivers/gpu/drm/zte/zx_vga.c
index 1634a08707fb..a3a4d6982888 100644
--- a/drivers/gpu/drm/zte/zx_vga.c
+++ b/drivers/gpu/drm/zte/zx_vga.c
@@ -23,15 +23,11 @@ struct zx_vga_pwrctrl {
u32 mask;
 };
 
-struct zx_vga_i2c {
-   struct i2c_adapter adap;
-   struct mutex lock;
-};
-
 struct zx_vga {
struct drm_connector connector;
struct drm_encoder encoder;
-   struct zx_vga_i2c *ddc;
+   /* protects ddc access */
+   struct mutex ddc_lock;
struct device *dev;
void __iomem *mmio;
struct clk *i2c_wclk;
@@ -86,7 +82,7 @@ static int zx_vga_connector_get_modes(struct drm_connector 
*connector)
 */
zx_writel(vga->mmio + VGA_AUTO_DETECT_SEL, 0);
 
-   edid = drm_get_edid(connector, &vga->ddc->adap);
+   edid = drm_get_edid(connector, connector->ddc);
if (!edid) {
/*
 * If EDID reading fails, we set the device state into
@@ -282,11 +278,10 @@ static int zx_vga_i2c_xfer(struct i2c_adapter *adap, 
struct i2c_msg *msgs,
   int num)
 {
struct zx_vga *vga = i2c_get_adapdata(adap);
-   struct zx_vga_i2c *ddc = vga->ddc;
int ret = 0;
int i;
 
-   mutex_lock(&ddc->lock);
+   mutex_lock(&vga->ddc_lock);
 
for (i = 0; i < num; i++) {
if (msgs[i].flags & I2C_M_RD)
@@ -301,7 +296,7 @@ static int zx_vga_i2c_xfer(struct i2c_adapter *adap, struct 
i2c_msg *msgs,
if (!ret)
ret = num;
 
-   mutex_unlock(&ddc->lock);
+   mutex_unlock(&vga->ddc_lock);
 
return ret;
 }
@@ -320,17 +315,15 @@ static int zx_vga_ddc_register(struct zx_vga *vga)
 {
struct device *dev = vga->dev;
struct i2c_adapter *adap;
-   struct zx_vga_i2c *ddc;
int ret;
 
-   ddc = devm_kzalloc(dev, sizeof(*ddc), GFP_KERNEL);
-   if (!ddc)
+   adap = devm_kzalloc(dev, sizeof(*adap), GFP_KERNEL);
+   if (!adap)
return -ENOMEM;
 
-   vga->ddc = ddc;
-   mutex_init(&ddc->lock);
+   vga->connector.ddc = adap;
+   mutex_init(&vga->ddc_lock);
 
-   adap = &ddc->adap;
adap->owner = THIS_MODULE;
adap->class = I2C_CLASS_DDC;
adap->dev.parent = dev;
-- 
2.17.1


[PATCH v3 15/22] drm: sti: Provide ddc symlink in hdmi connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/sti/sti_hdmi.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
index f03d617edc4c..c5e6c33ff2cd 100644
--- a/drivers/gpu/drm/sti/sti_hdmi.c
+++ b/drivers/gpu/drm/sti/sti_hdmi.c
@@ -1288,6 +1288,7 @@ static int sti_hdmi_bind(struct device *dev, struct 
device *master, void *data)
&sti_hdmi_connector_funcs, DRM_MODE_CONNECTOR_HDMIA);
drm_connector_helper_add(drm_connector,
&sti_hdmi_connector_helper_funcs);
+   drm_connector->ddc = hdmi->ddc_adapt;
 
/* initialise property */
sti_hdmi_connector_init_property(drm_dev, drm_connector);
-- 
2.17.1


[PATCH v3 19/22] drm/bridge: dw-hdmi: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/bridge/synopsys/dw-hdmi.c | 40 +++
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c 
b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
index c6490949d9db..0b9c9f2619da 100644
--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
@@ -161,7 +161,6 @@ struct dw_hdmi {
 
struct drm_display_mode previous_mode;
 
-   struct i2c_adapter *ddc;
void __iomem *regs;
bool sink_is_hdmi;
bool sink_has_audio;
@@ -1118,7 +1117,7 @@ static bool dw_hdmi_support_scdc(struct dw_hdmi *hdmi)
return false;
 
/* Disable if no DDC bus */
-   if (!hdmi->ddc)
+   if (!hdmi->connector.ddc)
return false;
 
/* Disable if SCDC is not supported, or if an HF-VSDB block is absent */
@@ -1156,10 +1155,11 @@ void dw_hdmi_set_high_tmds_clock_ratio(struct dw_hdmi 
*hdmi)
 
/* Control for TMDS Bit Period/TMDS Clock-Period Ratio */
if (dw_hdmi_support_scdc(hdmi)) {
+   struct i2c_adapter *ddc = hdmi->connector.ddc;
if (mtmdsclock > HDMI14_MAX_TMDSCLK)
-   drm_scdc_set_high_tmds_clock_ratio(hdmi->ddc, 1);
+   drm_scdc_set_high_tmds_clock_ratio(ddc, 1);
else
-   drm_scdc_set_high_tmds_clock_ratio(hdmi->ddc, 0);
+   drm_scdc_set_high_tmds_clock_ratio(ddc, 0);
}
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio);
@@ -1750,6 +1750,7 @@ static void hdmi_av_composer(struct dw_hdmi *hdmi,
if (dw_hdmi_support_scdc(hdmi)) {
if (vmode->mtmdsclock > HDMI14_MAX_TMDSCLK ||
hdmi_info->scdc.scrambling.low_rates) {
+   struct i2c_adapter *ddc = hdmi->connector.ddc;
/*
 * HDMI2.0 Specifies the following procedure:
 * After the Source Device has determined that
@@ -1759,13 +1760,12 @@ static void hdmi_av_composer(struct dw_hdmi *hdmi,
 * Source Devices compliant shall set the
 * Source Version = 1.
 */
-   drm_scdc_readb(hdmi->ddc, SCDC_SINK_VERSION,
-  &bytes);
-   drm_scdc_writeb(hdmi->ddc, SCDC_SOURCE_VERSION,
+   drm_scdc_readb(ddc, SCDC_SINK_VERSION, &bytes);
+   drm_scdc_writeb(ddc, SCDC_SOURCE_VERSION,
min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
 
/* Enabled Scrambling in the Sink */
-   drm_scdc_set_scrambling(hdmi->ddc, 1);
+   drm_scdc_set_scrambling(hdmi->connector.ddc, 1);
 
/*
 * To activate the scrambler feature, you must ensure
@@ -1781,7 +1781,7 @@ static void hdmi_av_composer(struct dw_hdmi *hdmi,
hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
HDMI_MC_SWRSTZ);
-   drm_scdc_set_scrambling(hdmi->ddc, 0);
+   drm_scdc_set_scrambling(hdmi->connector.ddc, 0);
}
}
 
@@ -2127,10 +2127,10 @@ static int dw_hdmi_connector_get_modes(struct 
drm_connector *connector)
struct edid *edid;
int ret = 0;
 
-   if (!hdmi->ddc)
+   if (!hdmi->connector.ddc)
return 0;
 
-   edid = drm_get_edid(connector, hdmi->ddc);
+   edid = drm_get_edid(connector, hdmi->connector.ddc);
if (edid) {
dev_dbg(hdmi->dev, "got edid: width[%d] x height[%d]\n",
edid->width_cm, edid->height_cm);
@@ -2548,9 +2548,9 @@ __dw_hdmi_probe(struct platform_device *pdev,
 
ddc_node = of_parse_phandle(np, "ddc-i2c-bus", 0);
if (ddc_node) {
-   hdmi->ddc = of_get_i2c_adapter_by_node(ddc_node);
+   hdmi->connector.ddc = of_get_i2c_adapter_by_node(ddc_node);
of_node_put(ddc_node);
-   if (!hdmi->ddc) {
+   if (!hdmi->connector.ddc) {
dev_dbg(hdmi->dev, "failed to read ddc node\n");
return ERR_PTR(-EPROBE_DEFER);
}
@@ -2689,7 +2689,7 @@ __dw_hdmi_probe(struct platform_device *pdev,
hdmi_init_clk_regenerator(hdmi);
 
/* If DDC bus is not specified, try to register HDMI I2C bus */
-   if (!hdmi->ddc) {
+   if (!hdmi->connector.ddc) {
/* Look for (optional) stuff related to unwedging */
hdmi->pinctrl = devm_pinctrl_get(dev);
if (!IS_ERR(hdmi->pinctrl)) {
@@ -2708,9 +2708,9 @@ 

[PATCH v3 17/22] drm/ast: Provide ddc symlink in connector sysfs directory

2019-06-28 Thread Andrzej Pietrasiewicz
Use the ddc pointer provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/ast/ast_mode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
index ffccbef962a4..155c3487a1a7 100644
--- a/drivers/gpu/drm/ast/ast_mode.c
+++ b/drivers/gpu/drm/ast/ast_mode.c
@@ -905,6 +905,7 @@ static int ast_connector_init(struct drm_device *dev)
drm_connector_attach_encoder(connector, encoder);
 
ast_connector->i2c = ast_i2c_create(dev);
+   connector->ddc = &ast_connector->i2c->adapter;
if (!ast_connector->i2c)
DRM_ERROR("failed to add ddc bus for connector\n");
 
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v3 02/22] drm/exynos: Provide ddc symlink in connector's sysfs

2019-06-28 Thread Andrzej Pietrasiewicz
Switch to using the ddc provided by the generic connector.

Signed-off-by: Andrzej Pietrasiewicz 
---
 drivers/gpu/drm/exynos/exynos_hdmi.c | 11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c 
b/drivers/gpu/drm/exynos/exynos_hdmi.c
index 894a99793633..6816e37861b7 100644
--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
+++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
@@ -126,7 +126,6 @@ struct hdmi_context {
void __iomem*regs;
void __iomem*regs_hdmiphy;
struct i2c_client   *hdmiphy_port;
-   struct i2c_adapter  *ddc_adpt;
struct gpio_desc*hpd_gpio;
int irq;
struct regmap   *pmureg;
@@ -872,10 +871,10 @@ static int hdmi_get_modes(struct drm_connector *connector)
struct edid *edid;
int ret;
 
-   if (!hdata->ddc_adpt)
+   if (!connector->ddc)
return -ENODEV;
 
-   edid = drm_get_edid(connector, hdata->ddc_adpt);
+   edid = drm_get_edid(connector, connector->ddc);
if (!edid)
return -ENODEV;
 
@@ -1893,7 +1892,7 @@ static int hdmi_get_ddc_adapter(struct hdmi_context 
*hdata)
return -EPROBE_DEFER;
}
 
-   hdata->ddc_adpt = adpt;
+   hdata->connector.ddc = adpt;
 
return 0;
 }
@@ -2045,7 +2044,7 @@ static int hdmi_probe(struct platform_device *pdev)
if (hdata->regs_hdmiphy)
iounmap(hdata->regs_hdmiphy);
 err_ddc:
-   put_device(&hdata->ddc_adpt->dev);
+   put_device(&hdata->connector.ddc->dev);
 
return ret;
 }
@@ -2072,7 +2071,7 @@ static int hdmi_remove(struct platform_device *pdev)
if (hdata->regs_hdmiphy)
iounmap(hdata->regs_hdmiphy);
 
-   put_device(&hdata->ddc_adpt->dev);
+   put_device(&hdata->connector.ddc->dev);
 
mutex_destroy(>mutex);
 
-- 
2.17.1


Re: [PATCH v3 00/22] Associate ddc adapters with connectors

2019-06-28 Thread Laurent Pinchart
Hi Andrzej,

Just FYI, I have a patch series that reworks how bridges and connectors
are handled, and it will heavily conflict with this. The purpose of the
two series isn't the same, so both make sense. I will post the patches
this weekend, and will then review this series in that context.
Hopefully we'll get the best of both worlds :-)

On Fri, Jun 28, 2019 at 06:01:14PM +0200, Andrzej Pietrasiewicz wrote:
> It is difficult for a user to know which of the i2c adapters is for which
> drm connector. This series addresses this problem.
> 
> The idea is to have a symbolic link in connector's sysfs directory, e.g.:
> 
> ls -l /sys/class/drm/card0-HDMI-A-1/ddc
> lrwxrwxrwx 1 root root 0 Jun 24 10:42 /sys/class/drm/card0-HDMI-A-1/ddc \
>   -> ../../../../soc/1388.i2c/i2c-2
> 
> The user then knows that their card0-HDMI-A-1 uses i2c-2 and can e.g. run
> ddcutil:
> 
> ddcutil -b 2 getvcp 0x10
> VCP code 0x10 (Brightness): current value = 90, max value = 100
> 
> The first patch in the series adds struct i2c_adapter pointer to struct
> drm_connector. If the field is used by a particular driver, then an
> appropriate symbolic link is created by the generic code, which is also added
> by this patch.
> 
> The second patch is an example of how to convert a driver to this new scheme.
> 
> v1..v2:
> 
> - used fixed name "ddc" for the symbolic link in order to make it easy for
> userspace to find the i2c adapter
> 
> v2..v3:
> 
> - converted as many drivers as possible.
> 
> PATCHES 3/22-22/22 SHOULD BE CONSIDERED RFC!
> 
> Andrzej Pietrasiewicz (22):
>   drm: Include ddc adapter pointer in struct drm_connector
>   drm/exynos: Provide ddc symlink in connector's sysfs
>   drm: rockchip: Provide ddc symlink in rk3066_hdmi sysfs directory
>   drm: rockchip: Provide ddc symlink in inno_hdmi sysfs directory
>   drm/msm/hdmi: Provide ddc symlink in hdmi connector sysfs directory
>   drm/sun4i: hdmi: Provide ddc symlink in sun4i hdmi connector sysfs
> directory
>   drm/mediatek: Provide ddc symlink in hdmi connector sysfs directory
>   drm/tegra: Provide ddc symlink in output connector sysfs directory
>   drm/imx: imx-ldb: Provide ddc symlink in connector's sysfs
>   drm/imx: imx-tve: Provide ddc symlink in connector's sysfs
>   drm/vc4: Provide ddc symlink in connector sysfs directory
>   drm: zte: Provide ddc symlink in hdmi connector sysfs directory
>   drm: zte: Provide ddc symlink in vga connector sysfs directory
>   drm/tilcdc: Provide ddc symlink in connector sysfs directory
>   drm: sti: Provide ddc symlink in hdmi connector sysfs directory
>   drm/mgag200: Provide ddc symlink in connector sysfs directory
>   drm/ast: Provide ddc symlink in connector sysfs directory
>   drm/bridge: dumb-vga-dac: Provide ddc symlink in connector sysfs
> directory
>   drm/bridge: dw-hdmi: Provide ddc symlink in connector sysfs directory
>   drm/bridge: ti-tfp410: Provide ddc symlink in connector sysfs
> directory
>   drm/amdgpu: Provide ddc symlink in connector sysfs directory
>   drm/radeon: Provide ddc symlink in connector sysfs directory
> 
>  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 70 +++-
>  drivers/gpu/drm/ast/ast_mode.c|  1 +
>  drivers/gpu/drm/bridge/dumb-vga-dac.c | 19 ++---
>  drivers/gpu/drm/bridge/synopsys/dw-hdmi.c | 40 -
>  drivers/gpu/drm/bridge/ti-tfp410.c| 19 ++---
>  drivers/gpu/drm/drm_sysfs.c   |  7 ++
>  drivers/gpu/drm/exynos/exynos_hdmi.c  | 11 ++-
>  drivers/gpu/drm/imx/imx-ldb.c | 13 ++-
>  drivers/gpu/drm/imx/imx-tve.c |  8 +-
>  drivers/gpu/drm/mediatek/mtk_hdmi.c   |  9 +-
>  drivers/gpu/drm/mgag200/mgag200_mode.c|  1 +
>  drivers/gpu/drm/msm/hdmi/hdmi_connector.c |  1 +
>  drivers/gpu/drm/radeon/radeon_connectors.c| 82 ++-
>  drivers/gpu/drm/rockchip/inno_hdmi.c  | 17 ++--
>  drivers/gpu/drm/rockchip/rk3066_hdmi.c| 17 ++--
>  drivers/gpu/drm/sti/sti_hdmi.c|  1 +
>  drivers/gpu/drm/sun4i/sun4i_hdmi.h|  1 -
>  drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c| 14 ++--
>  drivers/gpu/drm/tegra/drm.h   |  1 -
>  drivers/gpu/drm/tegra/output.c| 12 +--
>  drivers/gpu/drm/tegra/sor.c   |  6 +-
>  drivers/gpu/drm/tilcdc/tilcdc_tfp410.c|  1 +
>  drivers/gpu/drm/vc4/vc4_hdmi.c| 16 ++--
>  drivers/gpu/drm/zte/zx_hdmi.c | 25 ++
>  drivers/gpu/drm/zte/zx_vga.c  | 25 ++
>  include/drm/drm_connector.h   | 11 +++
>  26 files changed, 252 insertions(+), 176 deletions(-)
> 
> -- 
> 2.17.1
> 

-- 
Regards,

Laurent Pinchart

Re: [PATCH][next] drm/amdgpu/mes10.1: fix duplicated assignment to adev->mes.ucode_fw_version

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 11:05:39 AM
To: Deucher, Alexander; Koenig, Christian; Zhou, David(ChunMing); David Airlie; 
Daniel Vetter; Xiao, Jack; Zhang, Hawking; amd-gfx@lists.freedesktop.org; 
dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH][next] drm/amdgpu/mes10.1: fix duplicated assignment to 
adev->mes.ucode_fw_version

From: Colin Ian King 

Currently adev->mes.ucode_fw_version is being assigned twice with
different values. This looks like a cut-n-paste error and instead
the second assignment should be adev->mes.data_fw_version. Fix
this.

Addresses-Coverity: ("Unused value")
Fixes: 298d05460cc4 ("drm/amdgpu/mes10.1: load mes firmware file to CPU buffer")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c 
b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
index 29fab7984855..2a27f0b30bb5 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
@@ -91,7 +91,7 @@ static int mes_v10_1_init_microcode(struct amdgpu_device 
*adev)

 mes_hdr = (const struct mes_firmware_header_v1_0 *)adev->mes.fw->data;
 adev->mes.ucode_fw_version = le32_to_cpu(mes_hdr->mes_ucode_version);
-   adev->mes.ucode_fw_version =
+   adev->mes.data_fw_version =
 le32_to_cpu(mes_hdr->mes_ucode_data_version);
 adev->mes.uc_start_addr =
 le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) |
--
2.20.1


Re: [PATCH][next] drm/amd/powerplay: fix out of memory check on od8_settings

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 11:13:54 AM
To: Wang, Kevin(Yang); Rex Zhu; Quan, Evan; Deucher, Alexander; Koenig, 
Christian; Zhou, David(ChunMing); David Airlie; Daniel Vetter; 
amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH][next] drm/amd/powerplay: fix out of memory check on 
od8_settings

From: Colin Ian King 

The null pointer check on od8_settings is currently the opposite of what
it is intended to do. Fix this by adding in the missing ! operator.

Addresses-Coverity: ("Resource leak")
Fixes: 0c83d32c565c ("drm/amd/powerplay: simplified od_settings for each asic")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c 
b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 0f14fe14ecd8..eb9e6b3a5265 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -1501,8 +1501,7 @@ static int vega20_set_default_od8_setttings(struct 
smu_context *smu)
 return -EINVAL;

 od8_settings = kzalloc(sizeof(struct vega20_od8_settings), GFP_KERNEL);
-
-   if (od8_settings)
+   if (!od8_settings)
 return -ENOMEM;

 smu->od_settings = (void *)od8_settings;
--
2.20.1


[PATCH][next] drm/amd/powerplay: remove a less than zero uint32_t check

2019-06-28 Thread Colin King
From: Colin Ian King 

The check to see if the uint32_t variable 'size' is less than zero
is redundant as it is unsigned and can never be less than zero.
Remove this redundant check.

Addresses-Coverity: ("Unsigned compared to zero")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c 
b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index ac151da..6ea48d6 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -1043,9 +1043,6 @@ static int navi10_set_power_profile_mode(struct 
smu_context *smu, long *input, u
}
 
if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
-   if (size < 0)
-   return -EINVAL;
-
ret = smu_update_table(smu,
   SMU_TABLE_ACTIVITY_MONITOR_COEFF | 
WORKLOAD_PPLIB_CUSTOM_BIT << 16,
   (void *)(&activity_monitor), false);
-- 
2.7.4



Re: [PATCH 1/5] drm/amdgpu: allow direct submission in the VM backends

2019-06-28 Thread Christian König

On 28.06.19 at 16:30, Chunming Zhou wrote:

On 2019/6/28 20:18, Christian König wrote:

This allows us to update page tables directly while in a page fault.

Signed-off-by: Christian König 
---
   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  5 
   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 +++
   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 29 +
   3 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 489a162ca620..5941accea061 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -197,6 +197,11 @@ struct amdgpu_vm_update_params {
 */
struct amdgpu_vm *vm;
   
+	/**

+* @direct: if changes should be made directly
+*/
+   bool direct;
+
/**
 * @pages_addr:
 *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
index 5222d165abfc..f94e4896079c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
@@ -49,6 +49,10 @@ static int amdgpu_vm_cpu_prepare(struct 
amdgpu_vm_update_params *p, void *owner,
   {
int r;
   
+	/* Don't wait for anything during page fault */

+   if (p->direct)
+   return 0;
+
/* Wait for PT BOs to be idle. PTs share the same resv. object
 * as the root PD BO
 */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index ddd181f5ed37..891d597063cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -68,17 +68,17 @@ static int amdgpu_vm_sdma_prepare(struct 
amdgpu_vm_update_params *p,
if (r)
return r;
   
-	r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);

-   if (r)
-   return r;
+   p->num_dw_left = ndw;
+
+   if (p->direct)
+   return 0;
   
-	r = amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,

-owner, false);
+   r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);
if (r)
return r;
   
-	p->num_dw_left = ndw;

-   return 0;
+   return amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,
+   owner, false);
   }
   
   /**

@@ -99,13 +99,21 @@ static int amdgpu_vm_sdma_commit(struct 
amdgpu_vm_update_params *p,
struct dma_fence *f;
int r;
   
-	ring = container_of(p->vm->entity.rq->sched, struct amdgpu_ring, sched);

+   if (p->direct)
+   ring = p->adev->vm_manager.page_fault;
+   else
+   ring = container_of(p->vm->entity.rq->sched,
+   struct amdgpu_ring, sched);
   
   	WARN_ON(ib->length_dw == 0);

amdgpu_ring_pad_ib(ring, ib);
WARN_ON(ib->length_dw > p->num_dw_left);
-   r = amdgpu_job_submit(p->job, &p->vm->entity,
- AMDGPU_FENCE_OWNER_VM, &f);
+
+   if (p->direct)
+   r = amdgpu_job_submit_direct(p->job, ring, &f);

When we use direct submission after initialization, we need to take care
of the ring race condition, don't we? Am I missing anything?


Direct submission can only be used by the page fault worker thread. And at 
least for now there is exactly one worker thread.


Christian.




-David


+   else
+   r = amdgpu_job_submit(p->job, &p->vm->entity,
+ AMDGPU_FENCE_OWNER_VM, &f);
if (r)
goto error;
   
@@ -120,7 +128,6 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,

return r;
   }
   
-

   /**
* amdgpu_vm_sdma_copy_ptes - copy the PTEs from mapping
*



Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Koenig, Christian
On 28.06.19 at 16:38, Daniel Vetter wrote:
[SNIP]
> - when you submit command buffers, you _don't_ attach fences to all
> involved buffers
 That's not going to work because then the memory management thinks
 that the buffer is immediately movable, which it isn't,
>>> I guess we need to fix that then. I pretty much assumed that
>>> ->notify_move could add whatever fences you might want to add. Which
>>> would very neatly allow us to solve this problem here, instead of
>>> coming up with fake fences and fun stuff like that.
>> Adding the fence later on is not a solution because we need something
>> which beforehand can check if a buffer is movable or not.
>>
>> In the case of a move_notify the decision to move it is already done and
>> you can't say oh sorry I have to evict my process and reprogram the
>> hardware or whatever.
>>
>> Especially when you do this in an OOM situation.
> Why? I mean when the fence for a CS is there already, it might also
> still hang out in the scheduler, or be blocked on a fence from another
> driver, or anything like that. I don't see a conceptual difference.
> Plus with dynamic dma-buf the entire point is that an attached fence
> does _not_ mean the buffer is permanently pinned, but can be moved if
> you sync correctly. Might need a bit of tuning or a flag to indicate
> that some buffers should always be considered to be busy, and that you
> shouldn't start evicting those. But that's kinda a detail.
>
>  From a very high level there's really no difference between
> ->notify_move and the eviction_fence. Both give you a callback when
> someone else needs to move the buffer, that's all. The only difference
> is that the eviction_fence thing jumbles the callback and the fence
> into one, by preattaching a fence just in case. But again from a
> conceptual pov it doesn't matter whether the fence is always hanging
> around, or whether you just attach it when ->notify_move is called.

Sure there is a difference. See, when you attach the fence beforehand, the 
memory management can know that the buffer is busy.

Just imagine the following: We are in an OOM situation and need to swap 
things out to disk!

When the fence is attached beforehand, the handling can be as follows:
1. MM picks a BO from the LRU and starts to evict it.
2. The eviction fence is enabled and we stop the process using this BO.
3. As soon as the process is stopped the fence is set into the signaled 
state.
4. MM needs to evict more BOs, and since the fence for this process is 
now in the signaled state, it can intentionally pick up the ones which 
are now idle.

When we attach the fence only on eviction, that can't happen, and the MM 
would just pick the next random BO and potentially stop another process.

So I think we can summarize that the memory management definitely needs 
to know beforehand how costly it is to evict a BO.

And we could of course implement this with flags or counters or whatever, 
but we already have the fence infrastructure and I don't see a reason not 
to use it.

Regards,
Christian.



Re: [PATCH] drm/amd/powerplay: fix incorrect assignments to mclk_mask and soc_mask

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 10:45:17 AM
To: Wang, Kevin(Yang); Rex Zhu; Quan, Evan; Deucher, Alexander; Koenig, 
Christian; Zhou, David(ChunMing); David Airlie; Daniel Vetter; 
amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH] drm/amd/powerplay: fix incorrect assignments to mclk_mask and 
soc_mask

From: Colin Ian King 

There are null pointer checks on mlck_mask and soc_mask however the
sclk_mask is being used in assignments in what looks to be a cut-n-paste
coding error. Fix this by using the correct pointers in the assignments.

Addresses-Coverity: ("Dereference after null check")
Fixes: 2d9fb9b06643 ("drm/amd/powerplay: add function get_profiling_clk_mask 
for navi10")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c 
b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 27e5c80..ac151da 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -1134,14 +1134,14 @@ static int navi10_get_profiling_clk_mask(struct 
smu_context *smu,
 ret = smu_get_dpm_level_count(smu, SMU_MCLK, 
&level_count);
 if (ret)
 return ret;
-   *sclk_mask = level_count - 1;
+   *mclk_mask = level_count - 1;
 }

 if(soc_mask) {
 ret = smu_get_dpm_level_count(smu, SMU_SOCCLK, 
&level_count);
 if (ret)
 return ret;
-   *sclk_mask = level_count - 1;
+   *soc_mask = level_count - 1;
 }
 }

--
2.7.4


Re: [PATCH] drm/amd/powerplay: fix off-by-one array bounds check

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 10:24:02 AM
To: Rex Zhu; Quan, Evan; Deucher, Alexander; Koenig, Christian; Zhou, 
David(ChunMing); David Airlie; Daniel Vetter; amd-gfx@lists.freedesktop.org; 
dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH] drm/amd/powerplay: fix off-by-one array bounds check

From: Colin Ian King 

The array bounds check for index is currently off-by-one and should
be using >= rather than > on the upper bound. Fix this.

Addresses-Coverity: ("Out-of-bounds read")
Fixes: b3490673f905 ("drm/amd/powerplay: introduce the navi10 pptable 
implementation")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c 
b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 27e5c80..f678700 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -210,7 +210,7 @@ static int navi10_workload_map[] = {
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 int val;
-   if (index > SMU_MSG_MAX_COUNT)
+   if (index >= SMU_MSG_MAX_COUNT)
 return -EINVAL;

 val = navi10_message_map[index];
--
2.7.4


[PATCH][next] drm/amd/powerplay: fix out of memory check on od8_settings

2019-06-28 Thread Colin King
From: Colin Ian King 

The null pointer check on od8_settings is currently the opposite of what
it is intended to do. Fix this by adding in the missing ! operator.

Addresses-Coverity: ("Resource leak")
Fixes: 0c83d32c565c ("drm/amd/powerplay: simplified od_settings for each asic")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c 
b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 0f14fe14ecd8..eb9e6b3a5265 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -1501,8 +1501,7 @@ static int vega20_set_default_od8_setttings(struct 
smu_context *smu)
return -EINVAL;
 
od8_settings = kzalloc(sizeof(struct vega20_od8_settings), GFP_KERNEL);
-
-   if (od8_settings)
+   if (!od8_settings)
return -ENOMEM;
 
smu->od_settings = (void *)od8_settings;
-- 
2.20.1



Re: [PATCH][next] drm/amdkfd: fix a missing break in a switch statement

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 10:54:43 AM
To: Cox, Philip; Oded Gabbay; Deucher, Alexander; Koenig, Christian; Zhou, 
David(ChunMing); David Airlie; Daniel Vetter; dri-de...@lists.freedesktop.org; 
amd-gfx@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH][next] drm/amdkfd: fix a missing break in a switch statement

From: Colin Ian King 

Currently for the CHIP_RAVEN case there is a missing break
causing the code to fall through to the new CHIP_NAVI10 case.
Fix this by adding in the missing break statement.

Fixes: 14328aa58ce5 ("drm/amdkfd: Add navi10 support to amdkfd. (v3)")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
index 792371442195..4e3fc284f6ac 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
@@ -668,6 +668,7 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
 case CHIP_RAVEN:
 pcache_info = raven_cache_info;
 num_of_cache_types = ARRAY_SIZE(raven_cache_info);
+   break;
 case CHIP_NAVI10:
 pcache_info = navi10_cache_info;
 num_of_cache_types = ARRAY_SIZE(navi10_cache_info);
--
2.20.1


Re: [PATCH][next] drm/amdgpu: fix off-by-one comparison on a WARN_ON message

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 10:08:01 AM
To: Zhang, Hawking; Deucher, Alexander; Koenig, Christian; Zhou, 
David(ChunMing); David Airlie; Daniel Vetter; amd-gfx@lists.freedesktop.org; 
dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH][next] drm/amdgpu: fix off-by-one comparison on a WARN_ON 
message

From: Colin Ian King 

The WARN_ON is currently throwing a warning when i is 65 or higher which
is off by one. It should be 64 or higher (64 queues from 0..63 inclusive),
so fix this off-by-one comparison.

Fixes: 849aca9f9c03 ("drm/amdgpu: Move common code to amdgpu_gfx.c")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 74066e1466f7..c8d106c59e27 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -501,7 +501,7 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev)
 /* This situation may be hit in the future if a new HW
  * generation exposes more than 64 queues. If so, the
  * definition of queue_mask needs updating */
-   if (WARN_ON(i > (sizeof(queue_mask)*8))) {
+   if (WARN_ON(i >= (sizeof(queue_mask)*8))) {
 DRM_ERROR("Invalid KCQ enabled: %d\n", i);
 break;
 }
--
2.20.1


Re: [PATCH] drm/amdgpu: fix a missing break in a switch statement

2019-06-28 Thread Abramov, Slava
Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Colin King 

Sent: Friday, June 28, 2019 10:33:20 AM
To: Zhang, Hawking; Deucher, Alexander; Koenig, Christian; Zhou, 
David(ChunMing); David Airlie; Daniel Vetter; amd-gfx@lists.freedesktop.org; 
dri-de...@lists.freedesktop.org
Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
Subject: [PATCH] drm/amdgpu: fix a missing break in a switch statement

From: Colin Ian King 

Currently for the AMDGPU_IRQ_STATE_DISABLE there is a missing break
causing the code to fall through to the AMDGPU_IRQ_STATE_ENABLE case.
Fix this by adding in the missing break statement.

Addresses-Coverity: ("Missing break in switch")
Fixes: a644d85a5cd4 ("drm/amdgpu: add gfx v10 implementation (v10)")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 2932ade7dbd0..c165200361b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -4608,6 +4608,7 @@ gfx_v10_0_set_gfx_eop_interrupt_state(struct 
amdgpu_device *adev,
 cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
 TIME_STAMP_INT_ENABLE, 0);
 WREG32(cp_int_cntl_reg, cp_int_cntl);
+   break;
 case AMDGPU_IRQ_STATE_ENABLE:
 cp_int_cntl = RREG32(cp_int_cntl_reg);
 cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
--
2.20.1


[PATCH][next] drm/amdgpu/mes10.1: fix duplicated assignment to adev->mes.ucode_fw_version

2019-06-28 Thread Colin King
From: Colin Ian King 

Currently adev->mes.ucode_fw_version is being assigned twice with
different values. This looks like a cut-n-paste error and instead
the second assignment should be adev->mes.data_fw_version. Fix
this.

Addresses-Coverity: ("Unused value")
Fixes: 298d05460cc4 ("drm/amdgpu/mes10.1: load mes firmware file to CPU buffer")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c 
b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
index 29fab7984855..2a27f0b30bb5 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
@@ -91,7 +91,7 @@ static int mes_v10_1_init_microcode(struct amdgpu_device 
*adev)
 
mes_hdr = (const struct mes_firmware_header_v1_0 *)adev->mes.fw->data;
adev->mes.ucode_fw_version = le32_to_cpu(mes_hdr->mes_ucode_version);
-   adev->mes.ucode_fw_version =
+   adev->mes.data_fw_version =
le32_to_cpu(mes_hdr->mes_ucode_data_version);
adev->mes.uc_start_addr =
le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) |
-- 
2.20.1



[PATCH] drm/amdgpu: fix a missing break in a switch statement

2019-06-28 Thread Colin King
From: Colin Ian King 

Currently for the AMDGPU_IRQ_STATE_DISABLE there is a missing break
causing the code to fall through to the AMDGPU_IRQ_STATE_ENABLE case.
Fix this by adding in the missing break statement.

Addresses-Coverity: ("Missing break in switch")
Fixes: a644d85a5cd4 ("drm/amdgpu: add gfx v10 implementation (v10)")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 2932ade7dbd0..c165200361b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -4608,6 +4608,7 @@ gfx_v10_0_set_gfx_eop_interrupt_state(struct 
amdgpu_device *adev,
cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
TIME_STAMP_INT_ENABLE, 0);
WREG32(cp_int_cntl_reg, cp_int_cntl);
+   break;
case AMDGPU_IRQ_STATE_ENABLE:
cp_int_cntl = RREG32(cp_int_cntl_reg);
cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
-- 
2.20.1


[PATCH][next] drm/amdkfd: fix a missing break in a switch statement

2019-06-28 Thread Colin King
From: Colin Ian King 

Currently for the CHIP_RAVEN case there is a missing break
causing the code to fall through to the new CHIP_NAVI10 case.
Fix this by adding in the missing break statement.

Fixes: 14328aa58ce5 ("drm/amdkfd: Add navi10 support to amdkfd. (v3)")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
index 792371442195..4e3fc284f6ac 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
@@ -668,6 +668,7 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
case CHIP_RAVEN:
pcache_info = raven_cache_info;
num_of_cache_types = ARRAY_SIZE(raven_cache_info);
+   break;
case CHIP_NAVI10:
pcache_info = navi10_cache_info;
num_of_cache_types = ARRAY_SIZE(navi10_cache_info);
-- 
2.20.1



[PATCH] drm/amd/powerplay: fix incorrect assignments to mclk_mask and soc_mask

2019-06-28 Thread Colin King
From: Colin Ian King 

There are null pointer checks on mclk_mask and soc_mask; however,
sclk_mask is used in both assignments in what looks to be a cut-n-paste
coding error. Fix this by using the correct pointers in the assignments.

Addresses-Coverity: ("Dereference after null check")
Fixes: 2d9fb9b06643 ("drm/amd/powerplay: add function get_profiling_clk_mask 
for navi10")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c 
b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 27e5c80..ac151da 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -1134,14 +1134,14 @@ static int navi10_get_profiling_clk_mask(struct 
smu_context *smu,
ret = smu_get_dpm_level_count(smu, SMU_MCLK, 
&level_count);
if (ret)
return ret;
-   *sclk_mask = level_count - 1;
+   *mclk_mask = level_count - 1;
}
 
if(soc_mask) {
ret = smu_get_dpm_level_count(smu, SMU_SOCCLK, 
&level_count);
if (ret)
return ret;
-   *sclk_mask = level_count - 1;
+   *soc_mask = level_count - 1;
}
}
 
-- 
2.7.4



Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 12:24 PM Koenig, Christian
 wrote:
>
> Am 28.06.19 um 11:41 schrieb Daniel Vetter:
> > On Fri, Jun 28, 2019 at 10:40 AM Christian König
> >  wrote:
> >> Am 28.06.19 um 09:30 schrieb Daniel Vetter:
> >>> On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian
> >>>  wrote:
>  Am 27.06.19 um 21:57 schrieb Daniel Vetter:
> > [SNIP]
> >>> Well yeah you have to wait for outstanding rendering. Everyone does
> >>> that. The problem is that ->eviction_fence does not represent
> >>> rendering, it's a very contrived way to implement the ->notify_move
> >>> callback.
> >>>
> >>> For real fences you have to wait for rendering to complete, putting
> >>> aside corner cases like destroying an entire context with all its
> >>> pending rendering. Not really something we should optimize for I
> >>> think.
> >> No, real fences I don't need to wait for the rendering to complete either.
> >>
> >> If userspace said that this per process resource is dead and can be
> >> removed, we don't need to wait for it to become idle.
> > But why did you attach a fence in the first place?
>
> Because so that others can still wait for it.
>
> See it is perfectly valid to export this buffer object to let's say the
> Intel driver and in this case I don't get a move notification.

If you share with intel then the buffer is pinned. You can't move the
thing anymore.

> And I really don't want to add another workaround where I add the fences
> only when the BO is exported or stuff like that.

You don't need to add a fence for that case because you can't move the
buffer anyway.

> >> [SNIP]
> >> As far as I can see it is perfectly valid to remove all fences from this
> >> process as soon as the page tables are up to date.
> > I'm not objecting the design, but just the implementation.
>
> Ok, then we can at least agree on that.
>
> >>> So with the magic amdkfd_eviction fence I agree this makes sense. The
> >>> problem I'm having here is that the magic eviction fence itself
> >>> doesn't make sense. What I expect will happen (in terms of the new
> >>> dynamic dma-buf stuff, I don't have the ttm-ism ready to explain it in
> >>> those concepts).
> >>>
> >>> - when you submit command buffers, you _dont_ attach fences to all
> >>> involved buffers
> >> That's not going to work because then the memory management then thinks
> >> that the buffer is immediately movable, which it isn't,
> > I guess we need to fix that then. I pretty much assumed that
> > ->notify_move could add whatever fences you might want to add. Which
> > would very neatly allow us to solve this problem here, instead of
> > coming up with fake fences and fun stuff like that.
>
> Adding the fence later on is not a solution because we need something
> which beforehand can check if a buffer is movable or not.
>
> In the case of a move_notify the decision to move it is already done and
> you can't say oh sorry I have to evict my process and reprogram the
> hardware or whatever.
>
> Especially when you do this in an OOM situation.

Why? I mean when the fence for a CS is there already, it might also
still hang out in the scheduler, or be blocked on a fence from another
driver, or anything like that. I don't see a conceptual difference.
Plus with dynamic dma-buf the entire point is that an attached fence
does _not_ mean the buffer is permanently pinned, but can be moved if
you sync correctly. Might need a bit of tuning or a flag to indicate
that some buffers should always be considered busy, and that you
shouldn't start evicting those. But that's kinda a detail.

From a very high level there's really no difference between
->notify_move and the eviction_fence. Both give you a callback when
someone else needs to move the buffer, that's all. The only difference
is that the eviction_fence thing jumbles the callback and the fence
into one, by preattaching a fence just in case. But again from a
conceptual pov it doesn't matter whether the fence is always hanging
around, or whether you just attach it when ->notify_move is called.

> > If ->notify_move can't add fences, then the you have to attach the
> > right fences to all of the bo, all the time. And a CS model where
> > userspace just updates the working set and keeps submitting stuff,
> > while submitting new batches. And the kernel preempts the entire
> > context if memory needs to be evicted, and re-runs it only once the
> > working set is available again.
> >
> > No the eviction_fence is not a good design solution for this, and imo
> > you should have rejected that.
>
> Actually I was the one who suggested that because the alternatives
> doesn't sounded like they would work.
>
> [SNIP]
> >> See ttm_bo_individualize_resv() as well, here we do something similar
> >> for GFX what the KFD does when it releases memory.
> > Uh wut. I guess more funky tricks I need to first learn about, but
> > yeah doesn't make much sense to me right now.
> >
> >> E.g. for per process resources we copy over the current fences into an
> 

Re: [PATCH 1/5] drm/amdgpu: allow direct submission in the VM backends

2019-06-28 Thread Chunming Zhou

On 2019/6/28 20:18, Christian König wrote:
> This allows us to update page tables directly while in a page fault.
>
> Signed-off-by: Christian König 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  5 
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 +++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 29 +
>   3 files changed, 27 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 489a162ca620..5941accea061 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -197,6 +197,11 @@ struct amdgpu_vm_update_params {
>*/
>   struct amdgpu_vm *vm;
>   
> + /**
> +  * @direct: if changes should be made directly
> +  */
> + bool direct;
> +
>   /**
>* @pages_addr:
>*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> index 5222d165abfc..f94e4896079c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> @@ -49,6 +49,10 @@ static int amdgpu_vm_cpu_prepare(struct 
> amdgpu_vm_update_params *p, void *owner,
>   {
>   int r;
>   
> + /* Don't wait for anything during page fault */
> + if (p->direct)
> + return 0;
> +
>   /* Wait for PT BOs to be idle. PTs share the same resv. object
>* as the root PD BO
>*/
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> index ddd181f5ed37..891d597063cb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> @@ -68,17 +68,17 @@ static int amdgpu_vm_sdma_prepare(struct 
> amdgpu_vm_update_params *p,
>   if (r)
>   return r;
>   
> - r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);
> - if (r)
> - return r;
> + p->num_dw_left = ndw;
> +
> + if (p->direct)
> + return 0;
>   
> - r = amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,
> -  owner, false);
> + r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);
>   if (r)
>   return r;
>   
> - p->num_dw_left = ndw;
> - return 0;
> + return amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,
> + owner, false);
>   }
>   
>   /**
> @@ -99,13 +99,21 @@ static int amdgpu_vm_sdma_commit(struct 
> amdgpu_vm_update_params *p,
>   struct dma_fence *f;
>   int r;
>   
> - ring = container_of(p->vm->entity.rq->sched, struct amdgpu_ring, sched);
> + if (p->direct)
> + ring = p->adev->vm_manager.page_fault;
> + else
> + ring = container_of(p->vm->entity.rq->sched,
> + struct amdgpu_ring, sched);
>   
>   WARN_ON(ib->length_dw == 0);
>   amdgpu_ring_pad_ib(ring, ib);
>   WARN_ON(ib->length_dw > p->num_dw_left);
> - r = amdgpu_job_submit(p->job, &p->vm->entity,
> -   AMDGPU_FENCE_OWNER_VM, &f);
> +
> + if (p->direct)
> + r = amdgpu_job_submit_direct(p->job, ring, &f);

When we use direct submission after initialization, we need to take care 
of the ring race condition, don't we? Am I missing anything?


-David

> + else
> + r = amdgpu_job_submit(p->job, &p->vm->entity,
> +   AMDGPU_FENCE_OWNER_VM, &f);
>   if (r)
>   goto error;
>   
> @@ -120,7 +128,6 @@ static int amdgpu_vm_sdma_commit(struct 
> amdgpu_vm_update_params *p,
>   return r;
>   }
>   
> -
>   /**
>* amdgpu_vm_sdma_copy_ptes - copy the PTEs from mapping
>*

[PATCH] drm/amd/powerplay: fix off-by-one array bounds check

2019-06-28 Thread Colin King
From: Colin Ian King 

The array bounds check for index is currently off-by-one and should
be using >= rather than > on the upper bound. Fix this.

Addresses-Coverity: ("Out-of-bounds read")
Fixes: b3490673f905 ("drm/amd/powerplay: introduce the navi10 pptable 
implementation")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c 
b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 27e5c80..f678700 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -210,7 +210,7 @@ static int navi10_workload_map[] = {
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
int val;
-   if (index > SMU_MSG_MAX_COUNT)
+   if (index >= SMU_MSG_MAX_COUNT)
return -EINVAL;
 
val = navi10_message_map[index];
-- 
2.7.4



[PATCH][next] drm/amdgpu: fix off-by-one comparison on a WARN_ON message

2019-06-28 Thread Colin King
From: Colin Ian King 

The WARN_ON is currently triggered when i is 65 or higher, which is off
by one. It should trigger at 64 or higher (64 queues, indices 0..63
inclusive), so fix this off-by-one comparison.

Fixes: 849aca9f9c03 ("drm/amdgpu: Move common code to amdgpu_gfx.c")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 74066e1466f7..c8d106c59e27 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -501,7 +501,7 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev)
/* This situation may be hit in the future if a new HW
 * generation exposes more than 64 queues. If so, the
 * definition of queue_mask needs updating */
-   if (WARN_ON(i > (sizeof(queue_mask)*8))) {
+   if (WARN_ON(i >= (sizeof(queue_mask)*8))) {
DRM_ERROR("Invalid KCQ enabled: %d\n", i);
break;
}
-- 
2.20.1



Re: [PATCH] drm/amdgpu: Don't skip display settings in hwmgr_resume()

2019-06-28 Thread Alex Deucher
On Thu, Jun 20, 2019 at 7:22 PM Lyude Paul  wrote:
>
> I'm not entirely sure why this is, but for some reason:
>
> 921935dc6404 ("drm/amd/powerplay: enforce display related settings only on 
> needed")
>
> Breaks runtime PM resume on the Radeon PRO WX 3100 (Lexa) in one of the
> pre-production laptops I have. The issue manifests as the following
> messages in dmesg:
>
> [drm] UVD and UVD ENC initialized successfully.
> amdgpu :3b:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring vce1 
> test failed (-110)
> [drm:amdgpu_device_ip_resume_phase2 [amdgpu]] *ERROR* resume of IP block 
>  failed -110
> [drm:amdgpu_device_resume [amdgpu]] *ERROR* amdgpu_device_ip_resume failed 
> (-110).
>
> And happens after about 6-10 runtime PM suspend/resume cycles (sometimes
> sooner, if you're lucky!). Unfortunately I can't seem to pin down
> precisely which part in psm_adjust_power_state_dynamic that is causing
> the issue, but not skipping the display setting setup seems to fix it.
> Hopefully if there is a better fix for this, this patch will spark
> discussion around it.
>
> Fixes: 921935dc6404 ("drm/amd/powerplay: enforce display related settings 
> only on needed")
> Cc: Evan Quan 
> Cc: Alex Deucher 
> Cc: Huang Rui 
> Cc: Rex Zhu 
> Cc: Likun Gao 
> Cc:  # v5.1+
> Signed-off-by: Lyude Paul 

I've gone ahead and applied this.

Thanks,

Alex

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> index 6cd6497c6fc2..0e1b2d930816 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> @@ -325,7 +325,7 @@ int hwmgr_resume(struct pp_hwmgr *hwmgr)
> if (ret)
> return ret;
>
> -   ret = psm_adjust_power_state_dynamic(hwmgr, true, NULL);
> +   ret = psm_adjust_power_state_dynamic(hwmgr, false, NULL);
>
> return ret;
>  }
> --
> 2.21.0
>


Re: [PATCH] drm/amd/powerplay: use hardware fan control if no powerplay fan table

2019-06-28 Thread Abramov, Slava
Tested-by: Slava Abramov 

Acked-by: Slava Abramov 


From: amd-gfx  on behalf of Evan Quan 

Sent: Thursday, June 27, 2019 11:15:29 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan
Subject: [PATCH] drm/amd/powerplay: use hardware fan control if no powerplay 
fan table

Use SMC default fan table if no external powerplay fan table.

Change-Id: Icd7467a7fc5287a92945ba0fcc19699192b1683a
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c | 4 +++-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h   | 1 +
 drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c | 4 
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
index ae64ff7153d6..1cd5a8b5cdc1 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
@@ -916,8 +916,10 @@ static int init_thermal_controller(
 PHM_PlatformCaps_ThermalController
   );

-   if (0 == powerplay_table->usFanTableOffset)
+   if (0 == powerplay_table->usFanTableOffset) {
+   hwmgr->thermal_controller.use_hw_fan_control = 1;
 return 0;
+   }

 fan_table = (const PPTable_Generic_SubTable_Header *)
 (((unsigned long)powerplay_table) +
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h 
b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 2f186fcbdfc5..ec53bf24396e 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -697,6 +697,7 @@ struct pp_thermal_controller_info {
 uint8_t ucType;
 uint8_t ucI2cLine;
 uint8_t ucI2cAddress;
+   uint8_t use_hw_fan_control;
 struct pp_fan_info fanInfo;
 struct pp_advance_fan_control_parameters advanceFanControlParameters;
 };
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
index fbac2d3326b5..a1a9f6196009 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
@@ -2092,6 +2092,10 @@ static int polaris10_thermal_setup_fan_table(struct 
pp_hwmgr *hwmgr)
 return 0;
 }

+   /* use hardware fan control */
+   if (hwmgr->thermal_controller.use_hw_fan_control)
+   return 0;
+
 tmp64 = hwmgr->thermal_controller.advanceFanControlParameters.
 usPWMMin * duty100;
 do_div(tmp64, 1);
--
2.21.0


Re: [PATCH] drm/amd/powerplay: use hardware fan control if no powerplay fan table

2019-06-28 Thread Alex Deucher
On Thu, Jun 27, 2019 at 11:15 PM Evan Quan  wrote:
>
> Use SMC default fan table if no external powerplay fan table.

Maybe add a line saying you may get errors if the table is empty and
you try and use it.  With that,
Reviewed-by: Alex Deucher 

>
> Change-Id: Icd7467a7fc5287a92945ba0fcc19699192b1683a
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c | 4 +++-
>  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h   | 1 +
>  drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c | 4 
>  3 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
> index ae64ff7153d6..1cd5a8b5cdc1 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/process_pptables_v1_0.c
> @@ -916,8 +916,10 @@ static int init_thermal_controller(
> PHM_PlatformCaps_ThermalController
>   );
>
> -   if (0 == powerplay_table->usFanTableOffset)
> +   if (0 == powerplay_table->usFanTableOffset) {
> +   hwmgr->thermal_controller.use_hw_fan_control = 1;
> return 0;
> +   }
>
> fan_table = (const PPTable_Generic_SubTable_Header *)
> (((unsigned long)powerplay_table) +
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h 
> b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> index 2f186fcbdfc5..ec53bf24396e 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> @@ -697,6 +697,7 @@ struct pp_thermal_controller_info {
> uint8_t ucType;
> uint8_t ucI2cLine;
> uint8_t ucI2cAddress;
> +   uint8_t use_hw_fan_control;
> struct pp_fan_info fanInfo;
> struct pp_advance_fan_control_parameters advanceFanControlParameters;
>  };
> diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c 
> b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
> index fbac2d3326b5..a1a9f6196009 100644
> --- a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
> @@ -2092,6 +2092,10 @@ static int polaris10_thermal_setup_fan_table(struct 
> pp_hwmgr *hwmgr)
> return 0;
> }
>
> +   /* use hardware fan control */
> +   if (hwmgr->thermal_controller.use_hw_fan_control)
> +   return 0;
> +
> tmp64 = hwmgr->thermal_controller.advanceFanControlParameters.
> usPWMMin * duty100;
> do_div(tmp64, 1);
> --
> 2.21.0
>

Re: [PATCH] drm/amdgpu: fix MGPU fan boost enablement for XGMI reset

2019-06-28 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 


From: amd-gfx  on behalf of Evan Quan 

Sent: Thursday, June 27, 2019 11:31 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan
Subject: [PATCH] drm/amdgpu: fix MGPU fan boost enablement for XGMI reset

The MGPU fan boost feature should not be enabled until all the
devices from the same hive are back from reset.

Change-Id: I03a69434ff28f4eac209bd91320dde8a238a33cf
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c|  4 ++--
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 7541e1b076b0..9efa0423c242 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1219,6 +1219,10 @@ int amdgpu_dm_display_resume(struct amdgpu_device *adev 
);
 static inline int amdgpu_dm_display_resume(struct amdgpu_device *adev) { 
return 0; }
 #endif

+
+void amdgpu_register_gpu_instance(struct amdgpu_device *adev);
+void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev);
+
 #include "amdgpu_object.h"

 /* used by df_v3_6.c and amdgpu_pmu.c */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a2d234c07fc4..f39eb7b37c8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3558,6 +3558,12 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info 
*hive,
 if (vram_lost)
 
amdgpu_device_fill_reset_magic(tmp_adev);

+   /*
+* Re-add this ASIC as tracked, since the reset
+* already completed successfully.
+*/
+   amdgpu_register_gpu_instance(tmp_adev);
+
 r = amdgpu_device_ip_late_init(tmp_adev);
 if (r)
 goto out;
@@ -3692,6 +3698,13 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 device_list_handle = &device_list;
 }

+   /*
+* Mark these ASICs to be reset as untracked first,
+* and add them back after the reset completes.
+*/
+   list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head)
+   amdgpu_unregister_gpu_instance(tmp_adev);
+
 /* block all schedulers and reset given job's ring */
 list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
 for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index ed051fdb509f..e2c9d8d31ed8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -41,7 +41,7 @@
 #include "amdgpu_display.h"
 #include "amdgpu_ras.h"

-static void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
+void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
 {
 struct amdgpu_gpu_instance *gpu_instance;
 int i;
@@ -102,7 +102,7 @@ void amdgpu_driver_unload_kms(struct drm_device *dev)
 dev->dev_private = NULL;
 }

-static void amdgpu_register_gpu_instance(struct amdgpu_device *adev)
+void amdgpu_register_gpu_instance(struct amdgpu_device *adev)
 {
 struct amdgpu_gpu_instance *gpu_instance;

--
2.21.0


[PATCH 2/5] drm/amdgpu: allow direct submission of PDE updates.

2019-06-28 Thread Christian König
For handling PDE updates directly in the fault handler.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c  | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 8 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h   | 4 ++--
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 10abae398e51..9954f4ec44b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -348,7 +348,7 @@ static int vm_update_pds(struct amdgpu_vm *vm, struct 
amdgpu_sync *sync)
struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
int ret;
 
-   ret = amdgpu_vm_update_directories(adev, vm);
+   ret = amdgpu_vm_update_pdes(adev, vm, false);
if (ret)
return ret;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index c25e1ebc76c3..3fd7730612d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -913,7 +913,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (r)
return r;
 
-   r = amdgpu_vm_update_directories(adev, vm);
+   r = amdgpu_vm_update_pdes(adev, vm, false);
if (r)
return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index ed25a4e14404..579957e3ea67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -522,7 +522,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device 
*adev,
goto error;
}
 
-   r = amdgpu_vm_update_directories(adev, vm);
+   r = amdgpu_vm_update_pdes(adev, vm, false);
 
 error:
if (r && r != -ERESTARTSYS)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 1951f2abbdbc..5bd0408e4e56 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1221,18 +1221,19 @@ static void amdgpu_vm_invalidate_pds(struct 
amdgpu_device *adev,
 }
 
 /*
- * amdgpu_vm_update_directories - make sure that all directories are valid
+ * amdgpu_vm_update_pdes - make sure that all directories are valid
  *
  * @adev: amdgpu_device pointer
  * @vm: requested vm
+ * @direct: submit directly to the paging queue
  *
  * Makes sure all directories are up to date.
  *
  * Returns:
  * 0 for success, error for failure.
  */
-int amdgpu_vm_update_directories(struct amdgpu_device *adev,
-struct amdgpu_vm *vm)
+int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm, bool direct)
 {
struct amdgpu_vm_update_params params;
int r;
@@ -1243,6 +1244,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
 	memset(&params, 0, sizeof(params));
params.adev = adev;
params.vm = vm;
+   params.direct = direct;
 
 	r = vm->update_funcs->prepare(&params, AMDGPU_FENCE_OWNER_VM, NULL);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 5941accea061..7227c8da1a2a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -361,8 +361,8 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
  int (*callback)(void *p, struct amdgpu_bo *bo),
  void *param);
 int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, bool 
need_pipe_sync);
-int amdgpu_vm_update_directories(struct amdgpu_device *adev,
-struct amdgpu_vm *vm);
+int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm, bool direct);
 int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
  struct amdgpu_vm *vm,
  struct dma_fence **fence);
-- 
2.17.1


[PATCH 5/5] drm/amdgpu: add graceful VM fault handling

2019-06-28 Thread Christian König
Next step towards HMM support. For now just silence the retry fault and
optionally redirect the request to the dummy page.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 59 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  |  4 ++
 3 files changed, 65 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index a2d18fe46691..6f8c7cc07e8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3122,3 +3122,62 @@ void amdgpu_vm_set_task_info(struct amdgpu_vm *vm)
}
}
 }
+
+/**
+ * amdgpu_vm_handle_fault - graceful handling of VM faults.
+ * @adev: amdgpu device pointer
+ * @pasid: PASID of the VM
+ * @addr: Address of the fault
+ *
+ * Try to gracefully handle a VM fault. Return true if the fault was handled 
and
+ * shouldn't be reported any more.
+ */
+bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, unsigned int pasid,
+   uint64_t addr)
+{
+   struct amdgpu_ring *ring = &adev->sdma.instance[0].page;
+   uint64_t value, flags;
+   struct amdgpu_vm *vm;
+   long r;
+
+   if (!ring->sched.ready)
+   return false;
+
+   spin_lock(&adev->vm_manager.pasid_lock);
+   vm = idr_find(&adev->vm_manager.pasid_idr, pasid);
+   spin_unlock(&adev->vm_manager.pasid_lock);
+
+   if (!vm)
+   return false;
+
+   r = amdgpu_bo_reserve(vm->root.base.bo, true);
+   if (r)
+   return false;
+
+   addr /= AMDGPU_GPU_PAGE_SIZE;
+   flags = AMDGPU_PTE_VALID | AMDGPU_PTE_SNOOPED |
+   AMDGPU_PTE_SYSTEM;
+
+   if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_NEVER) {
+   /* Redirect the access to the dummy page */
+   value = adev->dummy_page_addr;
+   flags |= AMDGPU_PTE_EXECUTABLE | AMDGPU_PTE_READABLE |
+   AMDGPU_PTE_WRITEABLE;
+   } else {
+   value = 0;
+   }
+
+   r = amdgpu_vm_bo_update_mapping(adev, vm, true, NULL, addr, addr + 1,
+   flags, value, NULL, NULL);
+   if (r)
+   goto error;
+
+   r = amdgpu_vm_update_pdes(adev, vm, true);
+
+error:
+   amdgpu_bo_unreserve(vm->root.base.bo);
+   if (r < 0)
+   DRM_ERROR("Can't handle page fault (%ld)\n", r);
+
+   return false;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 7227c8da1a2a..d0a30614f70a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -408,6 +408,8 @@ void amdgpu_vm_check_compute_bug(struct amdgpu_device 
*adev);
 
 void amdgpu_vm_get_task_info(struct amdgpu_device *adev, unsigned int pasid,
 struct amdgpu_task_info *task_info);
+bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, unsigned int pasid,
+   uint64_t addr);
 
 void amdgpu_vm_set_task_info(struct amdgpu_vm *vm);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index bd5d36944481..faf6d64cd4d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -324,6 +324,10 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device 
*adev,
return 1; /* This also prevents sending it to KFD */
 
/* If it's the first fault for this address, process it normally */
+   if (retry_fault && !in_interrupt() &&
+   amdgpu_vm_handle_fault(adev, entry->pasid, addr))
+   return 1; /* This also prevents sending it to KFD */
+
if (!amdgpu_sriov_vf(adev)) {
status = RREG32(hub->vm_l2_pro_fault_status);
WREG32_P(hub->vm_l2_pro_fault_cntl, 1, ~1);
-- 
2.17.1


[PATCH 3/5] drm/amdgpu: allow direct submission of PTE updates

2019-06-28 Thread Christian König
For handling PTE updates directly in the fault handler.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 18 ++
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 5bd0408e4e56..a8ea7bf18cf6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1486,13 +1486,14 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_vm_update_params *params,
  * amdgpu_vm_bo_update_mapping - update a mapping in the vm page table
  *
  * @adev: amdgpu_device pointer
- * @exclusive: fence we need to sync to
- * @pages_addr: DMA addresses to use for mapping
  * @vm: requested vm
+ * @direct: direct submission in a page fault
+ * @exclusive: fence we need to sync to
  * @start: start of mapped range
  * @last: last mapped entry
  * @flags: flags for the entries
  * @addr: addr to set the area to
+ * @pages_addr: DMA addresses to use for mapping
  * @fence: optional resulting fence
  *
  * Fill in the page table entries between @start and @last.
@@ -1501,11 +1502,11 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_vm_update_params *params,
  * 0 for success, -EINVAL for failure.
  */
 static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
+  struct amdgpu_vm *vm, bool direct,
   struct dma_fence *exclusive,
-  dma_addr_t *pages_addr,
-  struct amdgpu_vm *vm,
   uint64_t start, uint64_t last,
   uint64_t flags, uint64_t addr,
+  dma_addr_t *pages_addr,
   struct dma_fence **fence)
 {
struct amdgpu_vm_update_params params;
@@ -1515,6 +1516,7 @@ static int amdgpu_vm_bo_update_mapping(struct 
amdgpu_device *adev,
	memset(&params, 0, sizeof(params));
params.adev = adev;
params.vm = vm;
+   params.direct = direct;
params.pages_addr = pages_addr;
 
/* sync to everything except eviction fences on unmapping */
@@ -1646,9 +1648,9 @@ static int amdgpu_vm_bo_split_mapping(struct 
amdgpu_device *adev,
}
 
last = min((uint64_t)mapping->last, start + max_entries - 1);
-   r = amdgpu_vm_bo_update_mapping(adev, exclusive, dma_addr, vm,
+   r = amdgpu_vm_bo_update_mapping(adev, vm, false, exclusive,
start, last, flags, addr,
-   fence);
+   dma_addr, fence);
if (r)
return r;
 
@@ -1942,9 +1944,9 @@ int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
mapping->start < AMDGPU_GMC_HOLE_START)
init_pte_value = AMDGPU_PTE_DEFAULT_ATC;
 
-   r = amdgpu_vm_bo_update_mapping(adev, NULL, NULL, vm,
+   r = amdgpu_vm_bo_update_mapping(adev, vm, false, NULL,
mapping->start, mapping->last,
-   init_pte_value, 0, &f);
+   init_pte_value, 0, NULL, &f);
amdgpu_vm_free_mapping(adev, vm, mapping, f);
if (r) {
dma_fence_put(f);
-- 
2.17.1


[PATCH 1/5] drm/amdgpu: allow direct submission in the VM backends

2019-06-28 Thread Christian König
This allows us to update page tables directly while in a page fault.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  5 
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 29 +
 3 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 489a162ca620..5941accea061 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -197,6 +197,11 @@ struct amdgpu_vm_update_params {
 */
struct amdgpu_vm *vm;
 
+   /**
+* @direct: if changes should be made directly
+*/
+   bool direct;
+
/**
 * @pages_addr:
 *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
index 5222d165abfc..f94e4896079c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
@@ -49,6 +49,10 @@ static int amdgpu_vm_cpu_prepare(struct 
amdgpu_vm_update_params *p, void *owner,
 {
int r;
 
+   /* Don't wait for anything during page fault */
+   if (p->direct)
+   return 0;
+
/* Wait for PT BOs to be idle. PTs share the same resv. object
 * as the root PD BO
 */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index ddd181f5ed37..891d597063cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -68,17 +68,17 @@ static int amdgpu_vm_sdma_prepare(struct 
amdgpu_vm_update_params *p,
if (r)
return r;
 
-   r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);
-   if (r)
-   return r;
+   p->num_dw_left = ndw;
+
+   if (p->direct)
+   return 0;

-   r = amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,
-owner, false);
+   r = amdgpu_sync_fence(p->adev, &p->job->sync, exclusive, false);
	if (r)
	return r;

-   p->num_dw_left = ndw;
-   return 0;
+   return amdgpu_sync_resv(p->adev, &p->job->sync, root->tbo.resv,
+   owner, false);
 }
 
 /**
@@ -99,13 +99,21 @@ static int amdgpu_vm_sdma_commit(struct 
amdgpu_vm_update_params *p,
struct dma_fence *f;
int r;
 
-   ring = container_of(p->vm->entity.rq->sched, struct amdgpu_ring, sched);
+   if (p->direct)
+   ring = p->adev->vm_manager.page_fault;
+   else
+   ring = container_of(p->vm->entity.rq->sched,
+   struct amdgpu_ring, sched);
 
WARN_ON(ib->length_dw == 0);
amdgpu_ring_pad_ib(ring, ib);
WARN_ON(ib->length_dw > p->num_dw_left);
-   r = amdgpu_job_submit(p->job, &p->vm->entity,
- AMDGPU_FENCE_OWNER_VM, &f);
+
+   if (p->direct)
+   r = amdgpu_job_submit_direct(p->job, ring, &f);
+   else
+   r = amdgpu_job_submit(p->job, &p->vm->entity,
+ AMDGPU_FENCE_OWNER_VM, &f);
if (r)
goto error;
 
@@ -120,7 +128,6 @@ static int amdgpu_vm_sdma_commit(struct 
amdgpu_vm_update_params *p,
return r;
 }
 
-
 /**
  * amdgpu_vm_sdma_copy_ptes - copy the PTEs from mapping
  *
-- 
2.17.1


[PATCH 4/5] drm/amdgpu: allow direct submission of clears

2019-06-28 Thread Christian König
For handling PD/PT clears directly in the fault handler.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index a8ea7bf18cf6..a2d18fe46691 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -695,6 +695,7 @@ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
  * @adev: amdgpu_device pointer
  * @vm: VM to clear BO from
  * @bo: BO to clear
+ * @direct: use a direct update
  *
  * Root PD needs to be reserved when calling this.
  *
@@ -703,7 +704,8 @@ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
  */
 static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
  struct amdgpu_vm *vm,
- struct amdgpu_bo *bo)
+ struct amdgpu_bo *bo,
+ bool direct)
 {
struct ttm_operation_ctx ctx = { true, false };
unsigned level = adev->vm_manager.root_level;
@@ -762,6 +764,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
memset(, 0, sizeof(params));
params.adev = adev;
params.vm = vm;
+   params.direct = direct;
 
	r = vm->update_funcs->prepare(&params, AMDGPU_FENCE_OWNER_KFD, NULL);
if (r)
@@ -852,7 +855,8 @@ static void amdgpu_vm_bo_param(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
  */
 static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
   struct amdgpu_vm *vm,
-  struct amdgpu_vm_pt_cursor *cursor)
+  struct amdgpu_vm_pt_cursor *cursor,
+  bool direct)
 {
struct amdgpu_vm_pt *entry = cursor->entry;
struct amdgpu_bo_param bp;
@@ -885,7 +889,7 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
pt->parent = amdgpu_bo_ref(cursor->parent->base.bo);
amdgpu_vm_bo_base_init(>base, vm, pt);
 
-   r = amdgpu_vm_clear_bo(adev, vm, pt);
+   r = amdgpu_vm_clear_bo(adev, vm, pt, direct);
if (r)
goto error_free_pt;
 
@@ -1395,7 +1399,8 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_vm_update_params *params,
uint64_t incr, entry_end, pe_start;
struct amdgpu_bo *pt;
 
-   r = amdgpu_vm_alloc_pts(params->adev, params->vm, &cursor);
+   r = amdgpu_vm_alloc_pts(params->adev, params->vm, &cursor,
+   params->direct);
if (r)
return r;
 
@@ -2733,7 +2738,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm,
 
amdgpu_vm_bo_base_init(>root.base, vm, root);
 
-   r = amdgpu_vm_clear_bo(adev, vm, root);
+   r = amdgpu_vm_clear_bo(adev, vm, root, false);
if (r)
goto error_unreserve;
 
@@ -2853,7 +2858,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, 
struct amdgpu_vm *vm, uns
 */
if (pte_support_ats != vm->pte_support_ats) {
vm->pte_support_ats = pte_support_ats;
-   r = amdgpu_vm_clear_bo(adev, vm, vm->root.base.bo);
+   r = amdgpu_vm_clear_bo(adev, vm, vm->root.base.bo, false);
if (r)
goto free_idr;
}
-- 
2.17.1


Re: [PATCH v3] drm/amdgpu: fix scheduler timeout calc

2019-06-28 Thread Christian König

Am 28.06.19 um 11:23 schrieb Cui, Flora:

scheduler timeout is in jiffies
v2: move timeout check to amdgpu_device_get_job_timeout_settings after
parsing the value
v3: add lockup_timeout param check. 0: keep default value. negative:
infinity timeout.

Change-Id: I26708c163db943ff8d930dd81bcab4b4b9d84eb2
Signed-off-by: Flora Cui 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 14 ++
  1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index e74a175..0d667fa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -245,7 +245,8 @@ module_param_named(msi, amdgpu_msi, int, 0444);
   * By default(with no lockup_timeout settings), the timeout for all 
non-compute(GFX, SDMA and Video)
   * jobs is 1. And there is no timeout enforced on compute jobs.
   */
-MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: 1 for 
non-compute jobs and no timeout for compute jobs), "
+MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: 1 for 
non-compute jobs and infinity timeout for compute jobs."
+   " 0: keep default value. negative: infinity timeout), "
"format is [Non-Compute] or [GFX,Compute,SDMA,Video]");
  module_param_string(lockup_timeout, amdgpu_lockup_timeout, 
sizeof(amdgpu_lockup_timeout), 0444);
  
@@ -1300,7 +1301,9 @@ int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)

 * By default timeout for non compute jobs is 1.
 * And there is no timeout enforced on compute jobs.
 */
-   adev->gfx_timeout = adev->sdma_timeout = adev->video_timeout = 1;
+   adev->gfx_timeout =
+   adev->sdma_timeout =
+   adev->video_timeout = msecs_to_jiffies(1);


I would write this as:

adev->gfx_timeout = msecs_to_jiffies(1);
adev->sdma_timeout = adev->video_timeout = adev->gfx_timeout;

Looks better than splitting this on multiple lines without any indentation.

Apart from that looks good to me,
Christian.


adev->compute_timeout = MAX_SCHEDULE_TIMEOUT;
  
  	if (strnlen(input, AMDGPU_MAX_TIMEOUT_PARAM_LENTH)) {

@@ -1310,10 +1313,13 @@ int amdgpu_device_get_job_timeout_settings(struct 
amdgpu_device *adev)
if (ret)
return ret;
  
-			/* Invalidate 0 and negative values */

-   if (timeout <= 0) {
+   if (timeout == 0) {
index++;
continue;
+   } else if (timeout < 0) {
+   timeout = MAX_SCHEDULE_TIMEOUT;
+   } else {
+   timeout = msecs_to_jiffies(timeout);
}
  
  			switch (index++) {



Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Koenig, Christian
Am 28.06.19 um 11:41 schrieb Daniel Vetter:
> On Fri, Jun 28, 2019 at 10:40 AM Christian König
>  wrote:
>> Am 28.06.19 um 09:30 schrieb Daniel Vetter:
>>> On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian
>>>  wrote:
 Am 27.06.19 um 21:57 schrieb Daniel Vetter:
> [SNIP]
>>> Well yeah you have to wait for outstanding rendering. Everyone does
>>> that. The problem is that ->eviction_fence does not represent
>>> rendering, it's a very contrived way to implement the ->notify_move
>>> callback.
>>>
>>> For real fences you have to wait for rendering to complete, putting
>>> aside corner cases like destroying an entire context with all its
>>> pending rendering. Not really something we should optimize for I
>>> think.
>> No, for real fences I don't need to wait for the rendering to complete either.
>>
>> If userspace said that this per process resource is dead and can be
>> removed, we don't need to wait for it to become idle.
> But why did you attach a fence in the first place?

So that others can still wait for it.

See it is perfectly valid to export this buffer object to let's say the 
Intel driver and in this case I don't get a move notification.

And I really don't want to add another workaround where I add the fences 
only when the BO is exported or stuff like that.

>> [SNIP]
>> As far as I can see it is perfectly valid to remove all fences from this
>> process as soon as the page tables are up to date.
> I'm not objecting the design, but just the implementation.

Ok, then we can at least agree on that.

>>> So with the magic amdkfd_eviction fence I agree this makes sense. The
>>> problem I'm having here is that the magic eviction fence itself
>>> doesn't make sense. What I expect will happen (in terms of the new
>>> dynamic dma-buf stuff, I don't have the ttm-ism ready to explain it in
>>> those concepts).
>>>
>>> - when you submit command buffers, you _dont_ attach fences to all
>>> involved buffers
>> That's not going to work because the memory management then thinks
>> that the buffer is immediately movable, which it isn't,
> I guess we need to fix that then. I pretty much assumed that
> ->notify_move could add whatever fences you might want to add. Which
> would very neatly allow us to solve this problem here, instead of
> coming up with fake fences and fun stuff like that.

Adding the fence later on is not a solution because we need something 
that can check beforehand whether a buffer is movable or not.

In the case of a move_notify the decision to move is already made, and 
you can't say "oh sorry, I have to evict my process and reprogram the 
hardware" or whatever.

Especially when you do this in an OOM situation.

> If ->notify_move can't add fences, then you have to attach the
> right fences to all of the bo, all the time. And a CS model where
> userspace just updates the working set and keeps submitting stuff,
> while submitting new batches. And the kernel preempts the entire
> context if memory needs to be evicted, and re-runs it only once the
> working set is available again.
>
> No the eviction_fence is not a good design solution for this, and imo
> you should have rejected that.

Actually I was the one who suggested that because the alternatives 
didn't sound like they would work.

[SNIP]
>> See ttm_bo_individualize_resv() as well, here we do something similar
>> for GFX what the KFD does when it releases memory.
> Uh wut. I guess more funky tricks I need to first learn about, but
> yeah doesn't make much sense ot me right now.
>
>> E.g. for per process resources we copy over the current fences into an
>> individualized reservation object, to make sure that this can be freed
>> up at some time in the future.
> Why are you deleting an object where others outside of your driver
> can still add fences? Just from this description this doesn't make
> sense to me ...

Multiple BOs share a single reservation object. This is used for example 
for page tables.

To map 16GB of memory I can easily have more than 8k BOs for the page 
tables.

So what we did was to use reservation object of the root page table for 
all other BOs of the VM as well.

Otherwise you would need to add a fence to 8k reservation objects and 
that is not really feasible.

That works fine, except for the case when you want to free up a page 
table. In this case we individualize the BO, copy the fences over and 
say ok we free that up when the current set of fences is signaled.

>> But I really want to go a step further and say ok, all fences from this
>> process can be removed after we updated the page tables.
>>
>>> No where do you need to remove a fence, because you never attached a
>>> bogus fence.
>>>
>>> Now with the magic eviction trick amdkfd uses, you can't do that,
>>> because you need to attach this magic fence to all buffers, all the
>>> time. And since it still needs to work like a fence it's one-shot
>>> only, i.e. instead of a reusable ->notify_move callback you need to
>>> create a new 

Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 10:40 AM Christian König
 wrote:
>
> Am 28.06.19 um 09:30 schrieb Daniel Vetter:
> > On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian
> >  wrote:
> >> Am 27.06.19 um 21:57 schrieb Daniel Vetter:
> >>> [SNIP]
>  Again, the reason to remove the fence from one reservation object is
>  simply that it is faster to remove it from one object than to attach a
>  new fence to all other objects.
> >>> Hm I guess I was led astray by the eviction_fence invalidation thing
> >>> in enable_signaling, and a few other places that freed the bo right
> >>> afterwards (like amdgpu_amdkfd_gpuvm_free_memory_of_gpu), where removing
> >>> the fences first and then freeing the bo is kinda pointless.
> >> AH! Now I know where your missing puzzle piece is.
> >>
> >> See when we free a buffer object TTM puts it on the delayed delete list
> >> to make sure that it gets freed up only after all fences are signaled.
> >>
> >> So this is essentially to make sure the BO gets freed up immediately.
> > Well yeah you have to wait for outstanding rendering. Everyone does
> > that. The problem is that ->eviction_fence does not represent
> > rendering, it's a very contrived way to implement the ->notify_move
> > callback.
> >
> > For real fences you have to wait for rendering to complete, putting
> > aside corner cases like destroying an entire context with all its
> > pending rendering. Not really something we should optimize for I
> > think.
>
> No, for real fences I don't need to wait for the rendering to complete either.
>
> If userspace said that this per process resource is dead and can be
> removed, we don't need to wait for it to become idle.

But why did you attach a fence in the first place?

> >>> Now with your insistence that I'm getting something wrong I guess the
> >>> you're talking about the unbind case, where the bo survives, but it's
> >>> mapping disappears, and hence that specific eviction_fence needs to be
> >>> removed.
> >>> And yeah there I guess just removing the magic eviction fence is
> >>> cheaper than replacing all the others.
> >> If possible I actually want to apply this to the general case of freeing
> >> up per process resources.
> >>
> >> In other words, when we don't track resource usage on a per-submission
> >> basis, freeing up resources is costly because we always have to wait for
> >> the last submission.
> >>
> >> But if we can prevent further access to the resource using page tables
> >> it is perfectly valid to free it as soon as the page tables are up to date.
> > Still not seeing how you can use this outside of the magic amdkfd
> > eviction_fence.
>
> As I explained you have a per process resource and userspace says that
> this one can go away.
>
> As far as I can see it is perfectly valid to remove all fences from this
> process as soon as the page tables are up to date.

I'm not objecting the design, but just the implementation.

> > So with the magic amdkfd_eviction fence I agree this makes sense. The
> > problem I'm having here is that the magic eviction fence itself
> > doesn't make sense. What I expect will happen (in terms of the new
> > dynamic dma-buf stuff, I don't have the ttm-ism ready to explain it in
> > those concepts).
> >
> > - when you submit command buffers, you _dont_ attach fences to all
> > involved buffers
>
> That's not going to work because the memory management then thinks
> that the buffer is immediately movable, which it isn't,

I guess we need to fix that then. I pretty much assumed that
->notify_move could add whatever fences you might want to add. Which
would very neatly allow us to solve this problem here, instead of
coming up with fake fences and fun stuff like that.

If ->notify_move can't add fences, then you have to attach the
right fences to all of the bo, all the time. And a CS model where
userspace just updates the working set and keeps submitting stuff,
while submitting new batches. And the kernel preempts the entire
context if memory needs to be evicted, and re-runs it only once the
working set is available again.

No the eviction_fence is not a good design solution for this, and imo
you should have rejected that. Who cares if the amdkfd people didn't
want to put in the work. But that ship sailed, so lets at least not
spread this more.

> > - when you get a ->notify_move there's currently 2 options:
> > 1. inefficient way: wait for the latest command buffer to complete, as
> > a defensive move. To do that you attach the fence from that command
> > buffer to the obj in your notifiy_move callback (the kerneldoc doesn't
> > explain this, but I think we really need this).
> > 2. efficient way: You just unmap from pagetables (and eat/handle the
> > fault if there is any).
>
> Exactly yeah. As far as I can see for freeing things up that is a
> perfectly valid approach as long as you have a VM which prevents
> accessing this memory.
>
> See ttm_bo_individualize_resv() as well, here we do something similar
> for GFX what the KFD 

[PATCH v3] drm/amdgpu: fix scheduler timeout calc

2019-06-28 Thread Cui, Flora
scheduler timeout is in jiffies
v2: move timeout check to amdgpu_device_get_job_timeout_settings after
parsing the value
v3: add lockup_timeout param check. 0: keep default value. negative:
infinity timeout.

Change-Id: I26708c163db943ff8d930dd81bcab4b4b9d84eb2
Signed-off-by: Flora Cui 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index e74a175..0d667fa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -245,7 +245,8 @@ module_param_named(msi, amdgpu_msi, int, 0444);
  * By default(with no lockup_timeout settings), the timeout for all 
non-compute(GFX, SDMA and Video)
  * jobs is 1. And there is no timeout enforced on compute jobs.
  */
-MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: 1 for 
non-compute jobs and no timeout for compute jobs), "
+MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: 1 for 
non-compute jobs and infinity timeout for compute jobs."
+   " 0: keep default value. negative: infinity timeout), "
"format is [Non-Compute] or [GFX,Compute,SDMA,Video]");
 module_param_string(lockup_timeout, amdgpu_lockup_timeout, 
sizeof(amdgpu_lockup_timeout), 0444);
 
@@ -1300,7 +1301,9 @@ int amdgpu_device_get_job_timeout_settings(struct 
amdgpu_device *adev)
 * By default timeout for non compute jobs is 1.
 * And there is no timeout enforced on compute jobs.
 */
-   adev->gfx_timeout = adev->sdma_timeout = adev->video_timeout = 1;
+   adev->gfx_timeout =
+   adev->sdma_timeout =
+   adev->video_timeout = msecs_to_jiffies(1);
adev->compute_timeout = MAX_SCHEDULE_TIMEOUT;
 
if (strnlen(input, AMDGPU_MAX_TIMEOUT_PARAM_LENTH)) {
@@ -1310,10 +1313,13 @@ int amdgpu_device_get_job_timeout_settings(struct 
amdgpu_device *adev)
if (ret)
return ret;
 
-   /* Invalidate 0 and negative values */
-   if (timeout <= 0) {
+   if (timeout == 0) {
index++;
continue;
+   } else if (timeout < 0) {
+   timeout = MAX_SCHEDULE_TIMEOUT;
+   } else {
+   timeout = msecs_to_jiffies(timeout);
}
 
switch (index++) {
-- 
2.7.4


[PATCH v3 08/18] drm/ttm: use gem vma_node

2019-06-28 Thread Gerd Hoffmann
Drop vma_node from ttm_buffer_object, use the gem struct
(base.vma_node) instead.

Signed-off-by: Gerd Hoffmann 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 2 +-
 drivers/gpu/drm/qxl/qxl_object.h   | 2 +-
 drivers/gpu/drm/radeon/radeon_object.h | 2 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 2 +-
 include/drm/ttm/ttm_bo_api.h   | 4 
 drivers/gpu/drm/drm_gem_vram_helper.c  | 5 +
 drivers/gpu/drm/nouveau/nouveau_display.c  | 2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c  | 2 +-
 drivers/gpu/drm/ttm/ttm_bo.c   | 8 
 drivers/gpu/drm/ttm/ttm_bo_util.c  | 2 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c| 9 +
 drivers/gpu/drm/virtio/virtgpu_prime.c | 3 ---
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_surface.c| 4 ++--
 14 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index a80a9972ad16..a68d85bd8fab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -191,7 +191,7 @@ static inline unsigned amdgpu_bo_gpu_page_alignment(struct 
amdgpu_bo *bo)
  */
 static inline u64 amdgpu_bo_mmap_offset(struct amdgpu_bo *bo)
 {
-   return drm_vma_node_offset_addr(&bo->tbo.vma_node);
+   return drm_vma_node_offset_addr(&bo->tbo.base.vma_node);
 }
 
 /**
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index b812d4ae9d0d..8ae54ba7857c 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -60,7 +60,7 @@ static inline unsigned long qxl_bo_size(struct qxl_bo *bo)
 
 static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo)
 {
-   return drm_vma_node_offset_addr(&bo->tbo.vma_node);
+   return drm_vma_node_offset_addr(&bo->tbo.base.vma_node);
 }
 
 static inline int qxl_bo_wait(struct qxl_bo *bo, u32 *mem_type,
diff --git a/drivers/gpu/drm/radeon/radeon_object.h 
b/drivers/gpu/drm/radeon/radeon_object.h
index 9ffd8215d38a..e5554bf9140e 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -116,7 +116,7 @@ static inline unsigned radeon_bo_gpu_page_alignment(struct 
radeon_bo *bo)
  */
 static inline u64 radeon_bo_mmap_offset(struct radeon_bo *bo)
 {
-   return drm_vma_node_offset_addr(&bo->tbo.vma_node);
+   return drm_vma_node_offset_addr(&bo->tbo.base.vma_node);
 }
 
 extern int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type,
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 9e2d3062b01d..7146ba00fd5b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -396,7 +396,7 @@ static inline void virtio_gpu_object_unref(struct 
virtio_gpu_object **bo)
 
 static inline u64 virtio_gpu_object_mmap_offset(struct virtio_gpu_object *bo)
 {
-   return drm_vma_node_offset_addr(&bo->tbo.vma_node);
+   return drm_vma_node_offset_addr(&bo->tbo.base.vma_node);
 }
 
 static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo,
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index fa050f0328ab..7ffc50a3303d 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -152,7 +152,6 @@ struct ttm_tt;
  * @ddestroy: List head for the delayed destroy list.
  * @swap: List head for swap LRU list.
  * @moving: Fence set when BO is moving
- * @vma_node: Address space manager node.
  * @offset: The current GPU offset, which can have different meanings
  * depending on the memory type. For SYSTEM type memory, it should be 0.
  * @cur_placement: Hint of current placement.
@@ -219,9 +218,6 @@ struct ttm_buffer_object {
 */
 
struct dma_fence *moving;
-
-   struct drm_vma_offset_node vma_node;
-
unsigned priority;
 
/**
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c 
b/drivers/gpu/drm/drm_gem_vram_helper.c
index 61d9520cc15f..2e474dee30df 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -163,7 +163,7 @@ EXPORT_SYMBOL(drm_gem_vram_put);
  */
 u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo)
 {
-   return drm_vma_node_offset_addr(&gbo->bo.vma_node);
+   return drm_vma_node_offset_addr(&gbo->bo.base.vma_node);
 }
 EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
 
@@ -633,9 +633,6 @@ EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vunmap);
 int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *gem,
   struct vm_area_struct *vma)
 {
-   struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-
-   gbo->bo.base.vma_node.vm_node.start = gbo->bo.vma_node.vm_node.start;
return drm_gem_prime_mmap(gem, vma);
 }
 EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_mmap);
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c 

[PATCH v3 12/18] drm/radeon: switch driver from bo->resv to bo->base.resv

2019-06-28 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/radeon/radeon_benchmark.c | 4 ++--
 drivers/gpu/drm/radeon/radeon_cs.c| 2 +-
 drivers/gpu/drm/radeon/radeon_display.c   | 2 +-
 drivers/gpu/drm/radeon/radeon_gem.c   | 6 +++---
 drivers/gpu/drm/radeon/radeon_mn.c| 2 +-
 drivers/gpu/drm/radeon/radeon_object.c| 9 -
 drivers/gpu/drm/radeon/radeon_test.c  | 8 
 drivers/gpu/drm/radeon/radeon_ttm.c   | 2 +-
 drivers/gpu/drm/radeon/radeon_uvd.c   | 2 +-
 drivers/gpu/drm/radeon/radeon_vm.c| 6 +++---
 10 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_benchmark.c 
b/drivers/gpu/drm/radeon/radeon_benchmark.c
index 7ce5064a59f6..1ea50ce16312 100644
--- a/drivers/gpu/drm/radeon/radeon_benchmark.c
+++ b/drivers/gpu/drm/radeon/radeon_benchmark.c
@@ -122,7 +122,7 @@ static void radeon_benchmark_move(struct radeon_device 
*rdev, unsigned size,
if (rdev->asic->copy.dma) {
time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
RADEON_BENCHMARK_COPY_DMA, n,
-   dobj->tbo.resv);
+   dobj->tbo.base.resv);
if (time < 0)
goto out_cleanup;
if (time > 0)
@@ -133,7 +133,7 @@ static void radeon_benchmark_move(struct radeon_device 
*rdev, unsigned size,
if (rdev->asic->copy.blit) {
time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
RADEON_BENCHMARK_COPY_BLIT, n,
-   dobj->tbo.resv);
+   dobj->tbo.base.resv);
if (time < 0)
goto out_cleanup;
if (time > 0)
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index d206654b31ad..7e5254a34e84 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -257,7 +257,7 @@ static int radeon_cs_sync_rings(struct radeon_cs_parser *p)
	list_for_each_entry(reloc, &p->validated, tv.head) {
struct reservation_object *resv;
 
-   resv = reloc->robj->tbo.resv;
+   resv = reloc->robj->tbo.base.resv;
	r = radeon_sync_resv(p->rdev, &p->ib.sync, resv,
 reloc->tv.num_shared);
if (r)
diff --git a/drivers/gpu/drm/radeon/radeon_display.c 
b/drivers/gpu/drm/radeon/radeon_display.c
index ea6b752dd3a4..7bf73230ac0b 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -533,7 +533,7 @@ static int radeon_crtc_page_flip_target(struct drm_crtc 
*crtc,
DRM_ERROR("failed to pin new rbo buffer before flip\n");
goto cleanup;
}
-   work->fence = 
dma_fence_get(reservation_object_get_excl(new_rbo->tbo.resv));
+   work->fence = 
dma_fence_get(reservation_object_get_excl(new_rbo->tbo.base.resv));
	radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL);
radeon_bo_unreserve(new_rbo);
 
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c 
b/drivers/gpu/drm/radeon/radeon_gem.c
index 7238007f5aa4..03873f21a734 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -114,7 +114,7 @@ static int radeon_gem_set_domain(struct drm_gem_object 
*gobj,
}
if (domain == RADEON_GEM_DOMAIN_CPU) {
/* Asking for cpu access wait for object idle */
-   r = reservation_object_wait_timeout_rcu(robj->tbo.resv, true, 
true, 30 * HZ);
+   r = reservation_object_wait_timeout_rcu(robj->tbo.base.resv, 
true, true, 30 * HZ);
if (!r)
r = -EBUSY;
 
@@ -449,7 +449,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void 
*data,
}
robj = gem_to_radeon_bo(gobj);
 
-   r = reservation_object_test_signaled_rcu(robj->tbo.resv, true);
+   r = reservation_object_test_signaled_rcu(robj->tbo.base.resv, true);
if (r == 0)
r = -EBUSY;
else
@@ -478,7 +478,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void 
*data,
}
robj = gem_to_radeon_bo(gobj);
 
-   ret = reservation_object_wait_timeout_rcu(robj->tbo.resv, true, true, 
30 * HZ);
+   ret = reservation_object_wait_timeout_rcu(robj->tbo.base.resv, true, 
true, 30 * HZ);
if (ret == 0)
r = -EBUSY;
else if (ret < 0)
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index 8c3871ed23a9..0d64ace0e6c1 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -163,7 +163,7 @@ static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn,

[PATCH v3 04/18] drm/radeon: use embedded gem object

2019-06-28 Thread Gerd Hoffmann
Drop drm_gem_object from radeon_bo, use the
ttm_buffer_object.base instead.

Build tested only.

Signed-off-by: Gerd Hoffmann 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/radeon/radeon.h |  3 +--
 drivers/gpu/drm/radeon/radeon_cs.c  |  2 +-
 drivers/gpu/drm/radeon/radeon_display.c |  4 ++--
 drivers/gpu/drm/radeon/radeon_gem.c |  2 +-
 drivers/gpu/drm/radeon/radeon_object.c  | 16 
 drivers/gpu/drm/radeon/radeon_prime.c   |  2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c |  2 +-
 7 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 32808e50be12..3f7701321d21 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -505,7 +505,6 @@ struct radeon_bo {
struct list_headva;
/* Constant after initialization */
struct radeon_device*rdev;
-   struct drm_gem_object   gem_base;
 
struct ttm_bo_kmap_obj  dma_buf_vmap;
pid_t   pid;
@@ -513,7 +512,7 @@ struct radeon_bo {
struct radeon_mn*mn;
struct list_headmn_list;
 };
-#define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, gem_base)
+#define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, tbo.base)
 
 int radeon_gem_debugfs_init(struct radeon_device *rdev);
 
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index cef0e697a2ea..d206654b31ad 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -443,7 +443,7 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
if (bo == NULL)
continue;
 
-	drm_gem_object_put_unlocked(&bo->gem_base);
+	drm_gem_object_put_unlocked(&bo->tbo.base);
}
}
kfree(parser->track);
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index bd52f15e6330..ea6b752dd3a4 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -275,7 +275,7 @@ static void radeon_unpin_work_func(struct work_struct *__work)
} else
DRM_ERROR("failed to reserve buffer after flip\n");
 
-	drm_gem_object_put_unlocked(&work->old_rbo->gem_base);
+	drm_gem_object_put_unlocked(&work->old_rbo->tbo.base);
kfree(work);
 }
 
@@ -607,7 +607,7 @@ static int radeon_crtc_page_flip_target(struct drm_crtc *crtc,
radeon_bo_unreserve(new_rbo);
 
 cleanup:
-	drm_gem_object_put_unlocked(&work->old_rbo->gem_base);
+	drm_gem_object_put_unlocked(&work->old_rbo->tbo.base);
dma_fence_put(work->fence);
kfree(work);
return r;
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index d8bc5d2dfd61..7238007f5aa4 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -83,7 +83,7 @@ int radeon_gem_object_create(struct radeon_device *rdev, unsigned long size,
}
return r;
}
-	*obj = &robj->gem_base;
+	*obj = &robj->tbo.base;
robj->pid = task_pid_nr(current);
 
mutex_lock(>gem.mutex);
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 7a2bad843f8a..66a21332ed4f 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -85,9 +85,9 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object *tbo)
	mutex_unlock(&bo->rdev->gem.mutex);
	radeon_bo_clear_surface_reg(bo);
	WARN_ON_ONCE(!list_empty(&bo->va));
-	if (bo->gem_base.import_attach)
-		drm_prime_gem_destroy(&bo->gem_base, bo->tbo.sg);
-	drm_gem_object_release(&bo->gem_base);
+	if (bo->tbo.base.import_attach)
+		drm_prime_gem_destroy(&bo->tbo.base, bo->tbo.sg);
+	drm_gem_object_release(&bo->tbo.base);
kfree(bo);
 }
 
@@ -209,7 +209,7 @@ int radeon_bo_create(struct radeon_device *rdev,
bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
-	drm_gem_private_object_init(rdev->ddev, &bo->gem_base, size);
+	drm_gem_private_object_init(rdev->ddev, &bo->tbo.base, size);
	bo->rdev = rdev;
	bo->surface_reg = -1;
	INIT_LIST_HEAD(&bo->list);
@@ -262,7 +262,7 @@ int radeon_bo_create(struct radeon_device *rdev,
	r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
			&bo->placement, page_align, !kernel, acc_size,
			sg, resv, &radeon_ttm_bo_destroy);
-	bo->gem_base.resv = bo->tbo.resv;
+	bo->tbo.base.resv = bo->tbo.resv;
	up_read(&rdev->pm.mclk_lock);
if (unlikely(r != 0)) {
return r;
@@ -443,13 +443,13 @@ void 

[PATCH v3 05/18] drm/amdgpu: use embedded gem object

2019-06-28 Thread Gerd Hoffmann
Drop drm_gem_object from amdgpu_bo, use the
ttm_buffer_object.base instead.

Build tested only.

Signed-off-by: Gerd Hoffmann 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |  1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c |  8 
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  | 10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  2 +-
 6 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
index b8ba6e27c61f..2f17150e26e1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
@@ -31,7 +31,7 @@
  */
 
 #define AMDGPU_GEM_DOMAIN_MAX  0x3
-#define gem_to_amdgpu_bo(gobj) container_of((gobj), struct amdgpu_bo, gem_base)
+#define gem_to_amdgpu_bo(gobj) container_of((gobj), struct amdgpu_bo, tbo.base)
 
 void amdgpu_gem_object_free(struct drm_gem_object *obj);
 int amdgpu_gem_object_open(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index c430e8259038..a80a9972ad16 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -94,7 +94,6 @@ struct amdgpu_bo {
/* per VM structure for page tables and with virtual addresses */
struct amdgpu_vm_bo_base*vm_bo;
/* Constant after initialization */
-   struct drm_gem_object   gem_base;
struct amdgpu_bo*parent;
struct amdgpu_bo*shadow;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 02cd845e77b3..4ee452fe0526 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -393,7 +393,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
bo->prime_shared_count = 1;
 
	ww_mutex_unlock(&resv->lock);
-	return &bo->gem_base;
+	return &bo->tbo.base;
 
 error:
	ww_mutex_unlock(&resv->lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 37b526c6f494..6d991e8df357 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -85,7 +85,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
}
return r;
}
-	*obj = &bo->gem_base;
+	*obj = &bo->tbo.base;
 
return 0;
 }
@@ -690,7 +690,7 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
struct drm_amdgpu_gem_create_in info;
void __user *out = u64_to_user_ptr(args->value);
 
-   info.bo_size = robj->gem_base.size;
+   info.bo_size = robj->tbo.base.size;
info.alignment = robj->tbo.mem.page_alignment << PAGE_SHIFT;
info.domains = robj->preferred_domains;
info.domain_flags = robj->flags;
@@ -820,8 +820,8 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
if (pin_count)
seq_printf(m, " pin count %d", pin_count);
 
-   dma_buf = READ_ONCE(bo->gem_base.dma_buf);
-   attachment = READ_ONCE(bo->gem_base.import_attach);
+   dma_buf = READ_ONCE(bo->tbo.base.dma_buf);
+   attachment = READ_ONCE(bo->tbo.base.import_attach);
 
if (attachment)
seq_printf(m, " imported from %p", dma_buf);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 7b251fd26bd5..ed2e88208a73 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -85,9 +85,9 @@ static void amdgpu_bo_destroy(struct ttm_buffer_object *tbo)
 
amdgpu_bo_kunmap(bo);
 
-	if (bo->gem_base.import_attach)
-		drm_prime_gem_destroy(&bo->gem_base, bo->tbo.sg);
-	drm_gem_object_release(&bo->gem_base);
+	if (bo->tbo.base.import_attach)
+		drm_prime_gem_destroy(&bo->tbo.base, bo->tbo.sg);
+	drm_gem_object_release(&bo->tbo.base);
/* in case amdgpu_device_recover_vram got NULL of bo->parent */
	if (!list_empty(&bo->shadow_list)) {
		mutex_lock(&adev->shadow_list_lock);
@@ -454,7 +454,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
-	drm_gem_private_object_init(adev->ddev, &bo->gem_base, size);
+	drm_gem_private_object_init(adev->ddev, &bo->tbo.base, size);
	INIT_LIST_HEAD(&bo->shadow_list);
bo->vm_bo = NULL;
bo->preferred_domains = bp->preferred_domain ? bp->preferred_domain :
@@ -505,7 +505,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
if 

[PATCH v3 14/18] drm/amdgpu: switch driver from bo->resv to bo->base.resv

2019-06-28 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
Reviewed-by: Christian König 
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c|  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 22 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c   |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c| 30 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c   |  2 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  2 +-
 13 files changed, 45 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index df26bf34b675..6dce43bd60f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -218,7 +218,7 @@ void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
struct amdgpu_amdkfd_fence *ef)
 {
-   struct reservation_object *resv = bo->tbo.resv;
+   struct reservation_object *resv = bo->tbo.base.resv;
struct reservation_object_list *old, *new;
unsigned int i, j, k;
 
@@ -812,7 +812,7 @@ static int process_sync_pds_resv(struct amdkfd_process_info *process_info,
struct amdgpu_bo *pd = peer_vm->root.base.bo;
 
ret = amdgpu_sync_resv(NULL,
-   sync, pd->tbo.resv,
+   sync, pd->tbo.base.resv,
AMDGPU_FENCE_OWNER_UNDEFINED, false);
if (ret)
return ret;
@@ -887,7 +887,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
  AMDGPU_FENCE_OWNER_KFD, false);
if (ret)
goto wait_pd_fail;
-	ret = reservation_object_reserve_shared(vm->root.base.bo->tbo.resv, 1);
+	ret = reservation_object_reserve_shared(vm->root.base.bo->tbo.base.resv, 1);
if (ret)
goto reserve_shared_fail;
amdgpu_bo_fence(vm->root.base.bo,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dc63707e426f..118ec7514277 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -402,7 +402,7 @@ static int amdgpu_cs_bo_validate(struct amdgpu_cs_parser *p,
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false,
-   .resv = bo->tbo.resv,
+   .resv = bo->tbo.base.resv,
.flags = 0
};
uint32_t domain;
@@ -734,7 +734,7 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 
list_for_each_entry(e, >validated, tv.head) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-   struct reservation_object *resv = bo->tbo.resv;
+   struct reservation_object *resv = bo->tbo.base.resv;
 
		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, p->filp,
 amdgpu_bo_explicit_sync(bo));
@@ -1732,7 +1732,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
*map = mapping;
 
/* Double check that the BO is reserved by this CS */
-	if (READ_ONCE((*bo)->tbo.resv->lock.ctx) != &parser->ticket)
+	if (READ_ONCE((*bo)->tbo.base.resv->lock.ctx) != &parser->ticket)
return -EINVAL;
 
if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 535650967b1a..b5d020e15c35 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -204,7 +204,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
goto unpin;
}
 
-	r = reservation_object_get_fences_rcu(new_abo->tbo.resv, &work->excl,
+	r = reservation_object_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
					      &work->shared_count,
					      &work->shared);
if (unlikely(r != 0)) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 4ee452fe0526..5e3a08325017 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -216,7 +216,7 @@ static int amdgpu_dma_buf_map_attach(struct dma_buf *dma_buf,
 * fences on the reservation object 

Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Christian König

On 28.06.19 at 09:30, Daniel Vetter wrote:

On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian wrote:

On 27.06.19 at 21:57, Daniel Vetter wrote:

[SNIP]

Again, the reason to remove the fence from one reservation object is
simply that it is faster to remove it from one object than to attach a
new fence to all other objects.

Hm I guess I was led astray by the eviction_fence invalidation thing
in enable_signaling, and a few other places that freed the bo right
afters (like amdgpu_amdkfd_gpuvm_free_memory_of_gpu), where removing
the fences first and then freeing the bo is kinda pointless.

AH! Now I know where your missing puzzle piece is.

See when we free a buffer object TTM puts it on the delayed delete list
to make sure that it gets freed up only after all fences are signaled.

So this is essentially to make sure the BO gets freed up immediately.

Well yeah you have to wait for outstanding rendering. Everyone does
that. The problem is that ->eviction_fence does not represent
rendering, it's a very contrived way to implement the ->notify_move
callback.

For real fences you have to wait for rendering to complete, putting
aside corner cases like destroying an entire context with all its
pending rendering. Not really something we should optimize for I
think.


No, real fences I don't need to wait for the rendering to complete either.

If userspace said that this per process resource is dead and can be 
removed, we don't need to wait for it to become idle.



Now with your insistence that I'm getting something wrong I guess
you're talking about the unbind case, where the bo survives, but its
mapping disappears, and hence that specific eviction_fence needs to be
removed.
And yeah there I guess just removing the magic eviction fence is
cheaper than replacing all the others.

If possible I actually want to apply this to the general case of freeing
up per process resources.

In other words when we don't track resource usage on a per submission
basis freeing up resources is costly because we always have to wait for
the last submission.

But if we can prevent further access to the resource using page tables
it is perfectly valid to free it as soon as the page tables are up to date.

Still not seeing how you can use this outside of the magic amdkfd
eviction_fence.


As I explained you have a per process resource and userspace says that 
this one can go away.


As far as I can see it is perfectly valid to remove all fences from this 
process as soon as the page tables are up to date.



So with the magic amdkfd_eviction fence I agree this makes sense. The
problem I'm having here is that the magic eviction fence itself
doesn't make sense. What I expect will happen (in terms of the new
dynamic dma-buf stuff, I don't have the ttm-ism ready to explain it in
those concepts).

- when you submit command buffers, you _dont_ attach fences to all
involved buffers


That's not going to work because the memory management then thinks 
that the buffer is immediately movable, which it isn't,



- when you get a ->notify_move there's currently 2 options:
1. inefficient way: wait for the latest command buffer to complete, as
a defensive move. To do that you attach the fence from that command
buffer to the obj in your notifiy_move callback (the kerneldoc doesn't
explain this, but I think we really need this).
2. efficient way: You just unmap from pagetables (and eat/handle the
fault if there is any).


Exactly yeah. As far as I can see for freeing things up that is a 
perfectly valid approach as long as you have a VM which prevents 
accessing this memory.


See ttm_bo_individualize_resv() as well, here we do something similar 
for GFX what the KFD does when it releases memory.


E.g. for per process resources we copy over the current fences into an 
individualized reservation object, to make sure that this can be freed 
up at some time in the future.


But I really want to go a step further and say ok, all fences from this 
process can be removed after we updated the page tables.



Nowhere do you need to remove a fence, because you never attached a
bogus fence.

Now with the magic eviction trick amdkfd uses, you can't do that,
because you need to attach this magic fence to all buffers, all the
time. And since it still needs to work like a fence it's one-shot
only, i.e. instead of a reuseable ->notify_move callback you need to
create a new fence every time ->enable_signalling is called. So in a
way replacing fences is just an artifact of some very, very crazy
calling convention.

If you have a real callback, there's no need for cycling through
fences, and therefore there's also no need to optimize their removal.

Or did you work under the assumption that ->notify_move cannot attach
new fences, and therefore you'd have to roll this magic fence trick to
even more places?


Well that notify_move approach was what was initially suggested by the 
KFD team as well. The problem is simply that there is no general 
notify_move 

XDC 2019: 10 days left to submit your talks!

2019-06-28 Thread Mark Filion
Hello!

Only 10 days to go to submit your talks, workshops or demos for this
year's X.Org Developer Conference, which will be taking place in
beautiful Montréal, Canada on October 2-4, 2019!

Whether it's the Linux kernel, Mesa, DRM, Wayland or X11, if it's
related to the Open Source graphics stack, please send it in!

Head to the XDC website to learn more: 

https://xdc2019.x.org/

The deadline for submissions is Sunday, 7 July 2019.

Best,

Mark

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 8:32 AM Koenig, Christian wrote:
> On 27.06.19 at 21:57, Daniel Vetter wrote:
> > [SNIP]
> >> Again, the reason to remove the fence from one reservation object is
> >> simply that it is faster to remove it from one object than to attach a
> >> new fence to all other objects.
> > Hm I guess I was led astray by the eviction_fence invalidation thing
> > in enable_signaling, and a few other places that freed the bo right
> > afters (like amdgpu_amdkfd_gpuvm_free_memory_of_gpu), where removing
> > the fences first and then freeing the bo is kinda pointless.
>
> AH! Now I know where your missing puzzle piece is.
>
> See when we free a buffer object TTM puts it on the delayed delete list
> to make sure that it gets freed up only after all fences are signaled.
>
> So this is essentially to make sure the BO gets freed up immediately.

Well yeah you have to wait for outstanding rendering. Everyone does
that. The problem is that ->eviction_fence does not represent
rendering, it's a very contrived way to implement the ->notify_move
callback.

For real fences you have to wait for rendering to complete, putting
aside corner cases like destroying an entire context with all its
pending rendering. Not really something we should optimize for I
think.

> > Now with your insistence that I'm getting something wrong I guess
> > you're talking about the unbind case, where the bo survives, but its
> > mapping disappears, and hence that specific eviction_fence needs to be
> > removed.
> > And yeah there I guess just removing the magic eviction fence is
> > cheaper than replacing all the others.
>
> If possible I actually want to apply this to the general case of freeing
> up per process resources.
>
> In other words when we don't track resource usage on a per submission
> basis freeing up resources is costly because we always have to wait for
> the last submission.
>
> But if we can prevent further access to the resource using page tables
> it is perfectly valid to free it as soon as the page tables are up to date.

Still not seeing how you can use this outside of the magic amdkfd
eviction_fence.

So with the magic amdkfd_eviction fence I agree this makes sense. The
problem I'm having here is that the magic eviction fence itself
doesn't make sense. What I expect will happen (in terms of the new
dynamic dma-buf stuff, I don't have the ttm-ism ready to explain it in
those concepts).

- when you submit command buffers, you _dont_ attach fences to all
involved buffers
- when you get a ->notify_move there's currently 2 options:
1. inefficient way: wait for the latest command buffer to complete, as
a defensive move. To do that you attach the fence from that command
buffer to the obj in your notifiy_move callback (the kerneldoc doesn't
explain this, but I think we really need this).
2. efficient way: You just unmap from pagetables (and eat/handle the
fault if there is any).

Nowhere do you need to remove a fence, because you never attached a
bogus fence.

Now with the magic eviction trick amdkfd uses, you can't do that,
because you need to attach this magic fence to all buffers, all the
time. And since it still needs to work like a fence it's one-shot
only, i.e. instead of a reuseable ->notify_move callback you need to
create a new fence every time ->enable_signalling is called. So in a
way replacing fences is just an artifact of some very, very crazy
calling convention.

If you have a real callback, there's no need for cycling through
fences, and therefore there's also no need to optimize their removal.

Or did you work under the assumption that ->notify_move cannot attach
new fences, and therefore you'd have to roll this magic fence trick to
even more places?

> > Now I guess I understand the mechanics of this somewhat, and what
> > you're doing, and it looks even somewhat safe. But I have no idea what
> > this is supposed to achieve. It feels a bit like ->notify_move, but
> > implemented in the most horrible way possible. Or maybe something
> > else.
> >
> > Really no idea.
> >
> > And given that we've wasted I few pages full of paragraphs already on
> > trying to explain what your new little helper is for, when it's safe
> > to use, when it's maybe not a good idea, and we still haven't even
> > bottomed out on what this is for ... well I really don't think it's a
> > good idea to inflict this into core code. Because just blindly
> > removing normal fences is not safe.
> >
> > Especially with like half a sentence of kerneldoc that explains
> > nothing of all this complexity.

Still makes no sense to me to have in core code.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amd/powerplay: update smu11_driver_if_navi10.h

2019-06-28 Thread Yin, Tianci (Rico)
Thanks Alex!

From: Deucher, Alexander
Sent: Thursday, June 27, 2019 23:38
To: Yin, Tianci (Rico); amd-gfx@lists.freedesktop.org
Cc: Xiao, Jack; Zhang, Hawking
Subject: Re: [PATCH] drm/amd/powerplay: update smu11_driver_if_navi10.h

Acked-by: Alex Deucher 

From: amd-gfx  on behalf of Tianci Yin 

Sent: Thursday, June 27, 2019 2:49 AM
To: amd-gfx@lists.freedesktop.org
Cc: Xiao, Jack; Yin, Tianci (Rico); Zhang, Hawking
Subject: [PATCH] drm/amd/powerplay: update smu11_driver_if_navi10.h

From: tiancyin 

update smu11_driver_if_navi10.h since the navi10 smu fw was
updated to 42.28

Signed-off-by: tiancyin 
---
 drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h | 6 +++---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
index a8b31bc..adbbfeb 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
@@ -26,7 +26,7 @@
 // *** IMPORTANT ***
 // SMU TEAM: Always increment the interface version if
 // any structure is changed in this file
-#define SMU11_DRIVER_IF_VERSION 0x32
+#define SMU11_DRIVER_IF_VERSION 0x33

 #define PPTABLE_NV10_SMU_VERSION 8

@@ -813,8 +813,8 @@ typedef struct {
   uint16_t UclkAverageLpfTau;
   uint16_t GfxActivityLpfTau;
   uint16_t UclkActivityLpfTau;
+  uint16_t SocketPowerLpfTau;

-  uint16_t Padding;
   // Padding - ignore
   uint32_t MmHubPadding[8]; // SMU internal use
 } DriverSmuConfig_t;
@@ -853,7 +853,7 @@ typedef struct {
   uint8_t  CurrGfxVoltageOffset  ;
   uint8_t  CurrMemVidOffset  ;
   uint8_t  Padding8  ;
-  uint16_t CurrSocketPower   ;
+  uint16_t AverageSocketPower;
   uint16_t TemperatureEdge   ;
   uint16_t TemperatureHotspot;
   uint16_t TemperatureMem;
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 99566de..373aeba 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -863,7 +863,7 @@ static int navi10_get_gpu_power(struct smu_context *smu, uint32_t *value)
 if (ret)
 return ret;

-   *value = metrics.CurrSocketPower << 8;
+   *value = metrics.AverageSocketPower << 8;

 return 0;
 }
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-28 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 3:16 AM Welty, Brian  wrote:
> On 6/26/2019 11:01 PM, Daniel Vetter wrote:
> > On Thu, Jun 27, 2019 at 12:06:13AM -0400, Kenny Ho wrote:
> >> On Wed, Jun 26, 2019 at 12:12 PM Daniel Vetter  wrote:
> >>>
> >>> I think with all the ttm refactoring going on I think we need to de-ttm
> >>> the interface functions here a bit. With Gerd Hoffmans series you can just
> >>> use a gem_bo pointer here, so what's left to do is have some extracted
> >>> structure for tracking memory types. I think Brian Welty has some ideas
> >>> for this, even in patch form. Would be good to keep him on cc at least for
> >>> the next version. We'd need to explicitly hand in the ttm_mem_reg (or
> >>> whatever the specific thing is going to be).
> >>
> >> I assume Gerd Hoffman's series you are referring to is this one?
> >> https://www.spinics.net/lists/dri-devel/msg215056.html
> >
> > There's a newer one, much more complete, but yes that's the work.
> >
> >> I can certainly keep an eye out for Gerd's refactoring while
> >> refactoring other parts of this RFC.
> >>
> >> I have added Brian and Gerd to the thread for awareness.
> >
> > btw just realized that maybe building the interfaces on top of ttm_mem_reg
> > is maybe not the best. That's what you're using right now, but in a way
> > that's just the ttm internal detail of how the backing storage is
> > allocated. I think the structure we need to abstract away is
> > ttm_mem_type_manager, without any of the actual management details.
> >
>
> Any de-ttm refactoring should probably not spam all the cgroups folks.
> So I removed cgroups list.
>
> As Daniel mentioned, some of us are looking at possible refactoring of TTM
> for reuse in i915 driver.
> Here is a brief summary of some ideas to be considered:
>
>  1) refactor part of ttm_mem_type_manager into a new drm_mem_type_region.
> Really, should then move the array from ttm_bo_device.man[] into 
> drm_device.
>
> Relevant to drm_cgroup, you could then perhaps access these stats through
> drm_device and don't need the mem_stats array in drmcgrp_device_resource.
>
>   1a)  doing this right means replacing TTM_PL_XXX memory types with new DRM
>  defines.  But could keep the TTM ones as redefinition of (new) DRM ones.
>  Probably those private ones (TTM_PL_PRIV) make this difficult.
>
>   All of the above could be eventually leveraged by the vram support being
>   implemented now in i915 driver.
>
>   2) refactor ttm_mem_reg + ttm_bus_placement into something generic for
>  any GEM object,  maybe call it drm_gem_object_placement.
>  ttm_mem_reg could remain as a wrapper for TTM drivers.
>  This hasn't been broadly discussed with intel-gfx folks, so not sure
>  this fits well into i915 or not.
>
>  Relevant to drm_cgroup, maybe this function:
> drmcgrp_mem_track_move(struct ttm_buffer_object *old_bo, bool evict,
> struct ttm_mem_reg *new_mem)
>  could potentially become:
> drmcgrp_mem_track_move(struct drm_gem_object *old_bo, bool evict,
> struct drm_gem_object_placement *new_place)
>
>  Though from ttm_mem_reg, you look to only be using mem_type and size.
>  I think Daniel is noting that ttm_mem_reg wasn't truly needed here, so
>  you could just pass in the mem_type and size instead.

Yeah I think the relevant part of your refactoring is creating a more
abstract memory type/resource thing (not the individual allocations
from it which ttm calls regions and I get confused about that every
time). I think that abstraction should also have a field for the total
(which I think cgroups also needs, both as read-only information and
starting value). ttm would put that somewhere into the
ttm_mem_type_manager, i915 would put it somewhere else, cma based
drivers could perhaps expose the cma heap like that (if it's exclusive
to the gpu at least).

> Would appreciate any feedback (positive or negative) on above
> Perhaps this should move to a new thread?   I could send out basic RFC
> patches for (1) if helpful but as it touches all the TTM drivers, nice to
> hear some feedback first.
> Anyway, this doesn't necessarily need to block forward progress on drm_cgroup,
> as refactoring into common base structures could happen incrementally.

Yeah I think new dri-devel thread with a totally unpolished rfc as
draft plus this as the intro would be good. That way we can ground
this a bit better in actual code.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH v2] drm/amdgpu: fix scheduler timeout calc

2019-06-28 Thread Koenig, Christian
On 28.06.19 at 07:32, Cui, Flora wrote:
> On 6/27/2019 6:17 PM, Christian König wrote:
>> On 27.06.19 at 12:03, Cui, Flora wrote:
>>> scheduler timeout is in jiffies
>>> v2: move timeout check to amdgpu_device_get_job_timeout_settings after
>>> parsing the value
>>>
>>> Change-Id: I26708c163db943ff8d930dd81bcab4b4b9d84eb2
>>> Signed-off-by: Flora Cui 
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 7 +--
>>>    1 file changed, 5 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> index e74a175..cc29d70 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> @@ -1300,7 +1300,9 @@ int
>>> amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
>>>     * By default timeout for non compute jobs is 1.
>>>     * And there is no timeout enforced on compute jobs.
>>>     */
>>> -    adev->gfx_timeout = adev->sdma_timeout = adev->video_timeout =
>>> 1;
>>> +    adev->gfx_timeout = \
>>> +    adev->sdma_timeout = \
>>> +    adev->video_timeout = msecs_to_jiffies(1);
>> Of hand that looks very odd to me. This is not a macro so why the \ here?
> will update in v3
>>>    adev->compute_timeout = MAX_SCHEDULE_TIMEOUT;
>>>      if (strnlen(input, AMDGPU_MAX_TIMEOUT_PARAM_LENTH)) {
>>> @@ -1314,7 +1316,8 @@ int
>>> amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
>>>    if (timeout <= 0) {
>>>    index++;
>>>    continue;
>>> -    }
>>> +    } else
>>> +    timeout = msecs_to_jiffies(timeout);
>> You can actually remove the "if (timeout <= 0)" as well,
>> msecs_to_jiffies will do the right thing for negative values.
> IMHO check for timeout==0 is still needed. msecs_to_jiffies() would
> return 0 and that's not desired for scheduler timer.

Good point, so 0 would use the default value and negative values would 
use infinity.

That sounds like a good solution to me,
Christian.

>> Christian.
>>
>>>      switch (index++) {
>>>    case 0:

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 2/2] dma-buf: cleanup shared fence removal

2019-06-28 Thread Koenig, Christian
Am 27.06.19 um 21:57 schrieb Daniel Vetter:
> [SNIP]
>> Again, the reason to remove the fence from one reservation object is
>> simply that it is faster to remove it from one object than to attach a
>> new fence to all other objects.
> Hm I guess I was lead astray by the eviction_fence invalidation thing
> in enable_signaling, and a few other places that freed the bo right
> afters (like amdgpu_amdkfd_gpuvm_free_memory_of_gpu), where removing
> the fences first and then freeing the bo is kinda pointless.

AH! Now I know where your missing puzzle piece is.

See when we free a buffer object TTM puts it on the delayed delete list 
to make sure that it gets freed up only after all fences are signaled.

So this is essentially to make sure the BO gets freed up immediately.

> Now with your insistence that I'm getting something wrong I guess the
> you're talking about the unbind case, where the bo survives, but it's
> mapping disappears, and hence that specific eviction_fence needs to be
> removed.
> And yeah there I guess just removing the magic eviction fence is
> cheaper than replacing all the others.

If possible I actually want to apply this to the general case of freeing 
up per process resources.

In other words when we don't track resource usage on a per submission 
basis freeing up resources is costly because we always have to wait for 
the last submission.

But if we can prevent further access to the resource using page tables 
it is perfectly valid to free it as soon as the page tables are up to date.

Regards,
Christian.

>
> Now I guess I understand the mechanics of this somewhat, and what
> you're doing, and lit ooks even somewhat safe. But I have no idea what
> this is supposed to achieve. It feels a bit like ->notify_move, but
> implemented in the most horrible way possible. Or maybe something
> else.
>
> Really no idea.
>
> And given that we've wasted I few pages full of paragraphs already on
> trying to explain what your new little helper is for, when it's safe
> to use, when it's maybe not a good idea, and we still haven't even
> bottomed out on what this is for ... well I really don't think it's a
> good idea to inflict this into core code. Because just blindly
> removing normal fences is not safe.
>
> Especially with like half a sentence of kerneldoc that explains
> nothing of all this complexity.
> -Daniel

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx