[Freedreno] [PATCH] drm: msm: add null pointer check

2022-03-29 Thread cgel . zte
From: Lv Ruyi 

kzalloc() can return NULL when an internal memory allocation error
occurs. Add a null pointer check to avoid dereferencing a NULL pointer.

Reported-by: Zeal Robot 
Signed-off-by: Lv Ruyi 
---
 drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
index 5d2ff6791058..acfe1b31e079 100644
--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
@@ -176,6 +176,8 @@ void msm_disp_snapshot_add_block(struct msm_disp_state *disp_state, u32 len,
va_list va;
 
new_blk = kzalloc(sizeof(struct msm_disp_state_block), GFP_KERNEL);
+   if (!new_blk)
+   return;
 
va_start(va, fmt);
 
-- 
2.25.1



Re: [Freedreno] [PATCH] dt-bindings: display: msm: dsi: remove address/size cells

2022-03-29 Thread Rob Herring
On Tue, Mar 29, 2022 at 12:01:52PM +0530, Vinod Koul wrote:
> On 28-03-22, 13:21, Rob Herring wrote:
> > On Mon, Mar 28, 2022 at 12:18 PM Krzysztof Kozlowski wrote:
> > >
> > > On 28/03/2022 19:16, Vinod Koul wrote:
> > > > On 28-03-22, 19:43, Dmitry Baryshkov wrote:
> > > >> On Mon, 28 Mar 2022 at 18:30, Krzysztof Kozlowski wrote:
> > > >>>
> > > >>> The DSI node is not a bus and the children do not have unit addresses.
> > > >>>
> > > >>> Reported-by: Vinod Koul 
> > > >>> Signed-off-by: Krzysztof Kozlowski 
> > > >>
> > > >> NAK.
> > > >> DSI panels are children of the DSI device tree node with the reg = <0>; address.
> > > >> This is the convention used by other platforms too (see e.g.
> > > >> arch/arm64/boot/dts/freescale/imx8mq-evk.dts).
> > > >
> > > > So we should add reg = 0, I will update my dtsi fix
> > > >
> > >
> > > To "ports" node? No. The reg=0 is for children of the bus, so the
> > > panels. How to combine both without warnings - ports and panel@0 - I
> > > don't know yet...
> > 
> > I don't think that should cause a warning. Or at least it's one we turn off.
> 
> Well in this case I think we might need a fix:
> Here is the example quoted in the binding. We have ports{} and then the
> two port@0 and port@1 underneath.

It's the #address-cells/#size-cells under 'ports' that applies to 'port' 
nodes. As 'ports' has no address (reg) itself, it doesn't need 
#address-cells/#size-cells in its parent node.

> 
> So it should be okay to drop #address-cells/#size-cells from dsi node
> but keep in ports node...

Yes.

> Thoughts...?

But I thought a panel@0 node was being added? If so then you need to add 
them back.

Rob


Re: [Freedreno] [PATCH v5 2/4] drm: introduce drm_writeback_connector_init_with_encoder() API

2022-03-29 Thread Abhinav Kumar

Hi Liviu

Gentle reminder ... Can you please help to clarify the last set of 
questions so that I can work on the next version?


Thanks

Abhinav

On 3/25/2022 9:31 AM, Abhinav Kumar wrote:

Hi Liviu

On 3/25/2022 3:19 AM, Liviu Dudau wrote:

On Thu, Mar 24, 2022 at 09:36:50AM -0700, Abhinav Kumar wrote:

Hi Liviu


Hello,



Thanks for the response.

On 3/24/2022 3:12 AM, Liviu Dudau wrote:

On Wed, Mar 23, 2022 at 11:28:56AM -0700, Abhinav Kumar wrote:

Hi Liviu


Hello,



Thanks for the review.

On 3/23/2022 9:46 AM, Liviu Dudau wrote:

On Mon, Mar 21, 2022 at 04:56:43PM -0700, Abhinav Kumar wrote:

For vendor drivers which pass an already allocated and initialized
encoder, especially for cases where the encoder hardware is shared or
the writeback encoder shares resources with the rest of the display
pipeline, introduce a new API,
drm_writeback_connector_init_with_encoder(), which expects an
initialized encoder as a parameter and only sets up the writeback
connector.

changes in v5:
- reorder this change to come before in the series
  to avoid incorrect functionality in subsequent changes
- continue using struct drm_encoder instead of
  struct drm_encoder * and switch it in next change

Signed-off-by: Abhinav Kumar 


Hi Abhinav,


---
 drivers/gpu/drm/drm_writeback.c | 143 
 include/drm/drm_writeback.h     |   5 ++
 2 files changed, 106 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/drm_writeback.c b/drivers/gpu/drm/drm_writeback.c

index dc2ef12..abe78b9 100644
--- a/drivers/gpu/drm/drm_writeback.c
+++ b/drivers/gpu/drm/drm_writeback.c
@@ -149,37 +149,15 @@ static const struct drm_encoder_funcs drm_writeback_encoder_funcs = {
 	.destroy = drm_encoder_cleanup,
 };
-/**
- * drm_writeback_connector_init - Initialize a writeback connector and its properties
- * @dev: DRM device
- * @wb_connector: Writeback connector to initialize
- * @con_funcs: Connector funcs vtable
- * @enc_helper_funcs: Encoder helper funcs vtable to be used by the internal encoder
- * @formats: Array of supported pixel formats for the writeback engine
- * @n_formats: Length of the formats array
- * @possible_crtcs: possible crtcs for the internal writeback encoder
- *
- * This function creates the writeback-connector-specific properties if they
- * have not been already created, initializes the connector as
- * type DRM_MODE_CONNECTOR_WRITEBACK, and correctly initializes the property
- * values. It will also create an internal encoder associated with the
- * drm_writeback_connector and set it to use the @enc_helper_funcs vtable for
- * the encoder helper.
- *
- * Drivers should always use this function instead of drm_connector_init() to
- * set up writeback connectors.
- *
- * Returns: 0 on success, or a negative error code
- */
-int drm_writeback_connector_init(struct drm_device *dev,
-		struct drm_writeback_connector *wb_connector,
-		const struct drm_connector_funcs *con_funcs,
-		const struct drm_encoder_helper_funcs *enc_helper_funcs,
-		const u32 *formats, int n_formats, uint32_t possible_crtcs)
+static int drm_writeback_connector_setup(struct drm_device *dev,
+		struct drm_writeback_connector *wb_connector,
+		const struct drm_connector_funcs *con_funcs, const u32 *formats,
+		int n_formats)
 {
 	struct drm_property_blob *blob;
-	struct drm_connector *connector = &wb_connector->base;
 	struct drm_mode_config *config = &dev->mode_config;
+	struct drm_connector *connector = &wb_connector->base;
+


Point of this reordering the statements is...?

This diff can be avoided. There was no reason to reorder this. I will
remove this re-order.



 	int ret = create_writeback_properties(dev);
 	if (ret != 0)
@@ -187,18 +165,10 @@ int drm_writeback_connector_init(struct drm_device *dev,
 	blob = drm_property_create_blob(dev, n_formats * sizeof(*formats),
 			formats);
-	if (IS_ERR(blob))
-		return PTR_ERR(blob);
-
-	drm_encoder_helper_add(&wb_connector->encoder, enc_helper_funcs);
-
-	wb_connector->encoder.possible_crtcs = possible_crtcs;
-
-	ret = drm_encoder_init(dev, &wb_connector->encoder,
-			       &drm_writeback_encoder_funcs,
-			       DRM_MODE_ENCODER_VIRTUAL, NULL);
-	if (ret)
-		goto fail;
+	if (IS_ERR(blob)) {
+		ret = PTR_ERR(blob);
+		return ret;
+	}


I don't see why you have changed the earlier code to store the result
of PTR_ERR into ret other than to help your debugging. I suggest that
you keep the existing code that returns PTR_ERR(blob) directly and you
will have a nicer diff stat as well.

Sure, I can fix this for a smaller diff stat.



 	connector->interlace_allowed = 0;
@@ -237,13 +207,102 @@ int drm_writeback_connector_init(struct drm_device *dev,
 attach_fail:
 	drm_connector

Re: [Freedreno] [PATCH] drm: msm: add null pointer check

2022-03-29 Thread Abhinav Kumar

Seems to be a duplicate of

https://patchwork.freedesktop.org/patch/479378/

Thanks

Abhinav

On 3/29/2022 3:34 AM, cgel@gmail.com wrote:

From: Lv Ruyi 

kzalloc() can return NULL when an internal memory allocation error
occurs. Add a null pointer check to avoid dereferencing a NULL pointer.

Reported-by: Zeal Robot 
Signed-off-by: Lv Ruyi 
---
  drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
index 5d2ff6791058..acfe1b31e079 100644
--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
@@ -176,6 +176,8 @@ void msm_disp_snapshot_add_block(struct msm_disp_state *disp_state, u32 len,
va_list va;
  
  	new_blk = kzalloc(sizeof(struct msm_disp_state_block), GFP_KERNEL);

+   if (!new_blk)
+   return;
  
  	va_start(va, fmt);
  


Re: [Freedreno] [PATCH 1/3] drm/msm/dpu: index dpu_kms->hw_vbif using vbif_idx

2022-03-29 Thread Abhinav Kumar
Assuming this series is newer and supersedes
https://patchwork.freedesktop.org/patch/464353/?series=97307&rev=2,
please check below.

On 2/16/2022 7:45 PM, Dmitry Baryshkov wrote:

Remove loops over hw_vbif. Instead always use the VBIF's idx as an index into the
array. This fixes an error in dpu_kms_hw_init(), where we fill
dpu_kms->hw_vbif[i], but check for an error pointer at
dpu_kms->hw_vbif[vbif_idx].

Fixes: 25fdd5933e4c ("drm/msm: Add SDM845 DPU support")
Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  | 10 
  drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c | 29 +++-
  2 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index d0653a9ec694..81a35c8d62e7 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -790,11 +790,9 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
_dpu_kms_mmu_destroy(dpu_kms);
  
  	if (dpu_kms->catalog) {

-   for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
-   u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
-
-   if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx])
-   dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
+   for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {
+   if (dpu_kms->hw_vbif[i])
+   dpu_hw_vbif_destroy(dpu_kms->hw_vbif[i]);
}
}
  
@@ -1102,7 +1100,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)

for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
  
-		dpu_kms->hw_vbif[i] = dpu_hw_vbif_init(vbif_idx,

+   dpu_kms->hw_vbif[vbif_idx] = dpu_hw_vbif_init(vbif_idx,
dpu_kms->vbif[vbif_idx], dpu_kms->catalog);
if (IS_ERR_OR_NULL(dpu_kms->hw_vbif[vbif_idx])) {
rc = PTR_ERR(dpu_kms->hw_vbif[vbif_idx]);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
index 21d20373eb8b..cbbf77b17fc3 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
@@ -11,6 +11,14 @@
  #include "dpu_hw_vbif.h"
  #include "dpu_trace.h"
  
+static struct dpu_hw_vbif *dpu_get_vbif(struct dpu_kms *dpu_kms, enum dpu_vbif vbif_idx)

+{
+   if (vbif_idx > ARRAY_SIZE(dpu_kms->hw_vbif))
+   return dpu_kms->hw_vbif[vbif_idx];


Shouldn't this be:

if (vbif_idx < ARRAY_SIZE(dpu_kms->hw_vbif))
	return dpu_kms->hw_vbif[vbif_idx];


+
+   return NULL;
+}
+
  /**
   * _dpu_vbif_wait_for_xin_halt - wait for the xin to halt
   * @vbif: Pointer to hardware vbif driver
@@ -148,20 +156,15 @@ static u32 _dpu_vbif_get_ot_limit(struct dpu_hw_vbif *vbif,
  void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
struct dpu_vbif_set_ot_params *params)
  {
-   struct dpu_hw_vbif *vbif = NULL;
+   struct dpu_hw_vbif *vbif;
struct dpu_hw_mdp *mdp;
bool forced_on = false;
u32 ot_lim;
-   int ret, i;
+   int ret;
  
  	mdp = dpu_kms->hw_mdp;
  
-	for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {

-   if (dpu_kms->hw_vbif[i] &&
-   dpu_kms->hw_vbif[i]->idx == params->vbif_idx)
-   vbif = dpu_kms->hw_vbif[i];
-   }
-
+   vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
if (!vbif || !mdp) {
DRM_DEBUG_ATOMIC("invalid arguments vbif %d mdp %d\n",
vbif != NULL, mdp != NULL);
@@ -204,7 +207,7 @@ void dpu_vbif_set_ot_limit(struct dpu_kms *dpu_kms,
  void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
struct dpu_vbif_set_qos_params *params)
  {
-   struct dpu_hw_vbif *vbif = NULL;
+   struct dpu_hw_vbif *vbif;
struct dpu_hw_mdp *mdp;
bool forced_on = false;
const struct dpu_vbif_qos_tbl *qos_tbl;
@@ -216,13 +219,7 @@ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,
}
mdp = dpu_kms->hw_mdp;
  
-	for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) {

-   if (dpu_kms->hw_vbif[i] &&
-   dpu_kms->hw_vbif[i]->idx == params->vbif_idx) {
-   vbif = dpu_kms->hw_vbif[i];
-   break;
-   }
-   }
+   vbif = dpu_get_vbif(dpu_kms, params->vbif_idx);
  
  	if (!vbif || !vbif->cap) {

DPU_ERROR("invalid vbif %d\n", params->vbif_idx);


Re: [Freedreno] [PATCH 2/3] drm/msm/dpu: fix error handling around dpu_hw_vbif_init

2022-03-29 Thread Abhinav Kumar




On 2/16/2022 7:45 PM, Dmitry Baryshkov wrote:

Using IS_ERR_OR_NULL() together with PTR_ERR() is a typical mistake: if
the value is NULL, the function will return 0 instead of a proper error
code. Moreover, the dpu_hw_vbif_init() function cannot return NULL.
So, replace the corresponding IS_ERR_OR_NULL() call with IS_ERR().

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 4 +---
  1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 81a35c8d62e7..f8f1bf3b511d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1102,10 +1102,8 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
  
  		dpu_kms->hw_vbif[vbif_idx] = dpu_hw_vbif_init(vbif_idx,

dpu_kms->vbif[vbif_idx], dpu_kms->catalog);
-   if (IS_ERR_OR_NULL(dpu_kms->hw_vbif[vbif_idx])) {
+   if (IS_ERR(dpu_kms->hw_vbif[vbif_idx])) {
rc = PTR_ERR(dpu_kms->hw_vbif[vbif_idx]);
-   if (!dpu_kms->hw_vbif[vbif_idx])
-   rc = -EINVAL;
DPU_ERROR("failed to init vbif %d: %d\n", vbif_idx, rc);
dpu_kms->hw_vbif[vbif_idx] = NULL;
goto power_error;
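
To make the pitfall concrete, a minimal sketch (struct thing and
create_thing() are hypothetical; only the linux/err.h semantics are
assumed):

#include <linux/err.h>

struct thing;
struct thing *create_thing(void);	/* may return NULL or ERR_PTR(-E...) */

static int init_thing(struct thing **out)
{
	struct thing *t = create_thing();

	/*
	 * BUG: PTR_ERR(NULL) == 0, so the NULL case is silently
	 * reported to the caller as success.
	 */
	if (IS_ERR_OR_NULL(t))
		return PTR_ERR(t);

	*out = t;
	return 0;
}

If the callee can return NULL, handle that case explicitly; if it
cannot, as with dpu_hw_vbif_init() here, IS_ERR()/PTR_ERR() is the
right pairing.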


Re: [Freedreno] [PATCH 3/3] drm/msm/dpu: drop VBIF indices

2022-03-29 Thread Abhinav Kumar




On 2/16/2022 7:45 PM, Dmitry Baryshkov wrote:

We do not expect to have other VBIFs. Drop VBIF_n indices and always use
VBIF_RT and VBIF_NRT.

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
  .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c|  4 +--
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h   |  6 ++--
  drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c  | 36 ---
  3 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index aa4d20762ccb..dbb853042aa0 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -1115,7 +1115,7 @@ static const struct dpu_vbif_dynamic_ot_cfg msm8998_ot_rdwr_cfg[] = {
  
  static const struct dpu_vbif_cfg msm8998_vbif[] = {

{
-   .name = "vbif_0", .id = VBIF_0,
+   .name = "vbif_rt", .id = VBIF_RT,
.base = 0, .len = 0x1040,
.default_ot_rd_limit = 32,
.default_ot_wr_limit = 32,
@@ -1144,7 +1144,7 @@ static const struct dpu_vbif_cfg msm8998_vbif[] = {
  
  static const struct dpu_vbif_cfg sdm845_vbif[] = {

{
-   .name = "vbif_0", .id = VBIF_0,
+   .name = "vbif_rt", .id = VBIF_RT,
.base = 0, .len = 0x1040,
.features = BIT(DPU_VBIF_QOS_REMAP),
.xin_halt_timeout = 0x4000,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h
index bb9ceadeb0bb..598c201ae50d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h
@@ -254,11 +254,9 @@ enum dpu_wd_timer {
  };
  
  enum dpu_vbif {

-   VBIF_0,
-   VBIF_1,
+   VBIF_RT,
+   VBIF_NRT,
VBIF_MAX,
-   VBIF_RT = VBIF_0,
-   VBIF_NRT = VBIF_1
  };
  
  /**

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
index cbbf77b17fc3..c011d4ab6acc 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
@@ -19,6 +19,18 @@ static struct dpu_hw_vbif *dpu_get_vbif(struct dpu_kms *dpu_kms, enum dpu_vbif v
return NULL;
  }
  
+static const char *dpu_vbif_name(enum dpu_vbif idx)

+{
+   switch (idx) {
+   case VBIF_RT:
+   return "VBIF_RT";
+   case VBIF_NRT:
+   return "VBIF_NRT";
+   default:
+   return "??";
+   }
+}
+
  /**
   * _dpu_vbif_wait_for_xin_halt - wait for the xin to halt
   * @vbif: Pointer to hardware vbif driver
@@ -50,12 +62,12 @@ static int _dpu_vbif_wait_for_xin_halt(struct dpu_hw_vbif *vbif, u32 xin_id)
  
  	if (!status) {

rc = -ETIMEDOUT;
-   DPU_ERROR("VBIF %d client %d not halting. TIMEDOUT.\n",
-   vbif->idx - VBIF_0, xin_id);
+   DPU_ERROR("%s client %d not halting. TIMEDOUT.\n",
+   dpu_vbif_name(vbif->idx), xin_id);
} else {
rc = 0;
-   DRM_DEBUG_ATOMIC("VBIF %d client %d is halted\n",
-   vbif->idx - VBIF_0, xin_id);
+   DRM_DEBUG_ATOMIC("%s client %d is halted\n",
+   dpu_vbif_name(vbif->idx), xin_id);
}
  
  	return rc;

@@ -95,8 +107,8 @@ static void _dpu_vbif_apply_dynamic_ot_limit(struct dpu_hw_vbif *vbif,
}
}
  
-	DRM_DEBUG_ATOMIC("vbif:%d xin:%d w:%d h:%d fps:%d pps:%llu ot:%u\n",

-   vbif->idx - VBIF_0, params->xin_id,
+   DRM_DEBUG_ATOMIC("%s xin:%d w:%d h:%d fps:%d pps:%llu ot:%u\n",
+   dpu_vbif_name(vbif->idx), params->xin_id,
params->width, params->height, params->frame_rate,
pps, *ot_lim);
  }
@@ -141,8 +153,8 @@ static u32 _dpu_vbif_get_ot_limit(struct dpu_hw_vbif *vbif,
}
  
  exit:

-   DRM_DEBUG_ATOMIC("vbif:%d xin:%d ot_lim:%d\n",
-   vbif->idx - VBIF_0, params->xin_id, ot_lim);
+   DRM_DEBUG_ATOMIC("%s xin:%d ot_lim:%d\n",
+   dpu_vbif_name(vbif->idx), params->xin_id, ot_lim);
return ot_lim;
  }
  
@@ -242,8 +254,8 @@ void dpu_vbif_set_qos_remap(struct dpu_kms *dpu_kms,

forced_on = mdp->ops.setup_clk_force_ctrl(mdp, params->clk_ctrl, true);
  
  	for (i = 0; i < qos_tbl->npriority_lvl; i++) {

-   DRM_DEBUG_ATOMIC("vbif:%d xin:%d lvl:%d/%d\n",
-   params->vbif_idx, params->xin_id, i,
+   DRM_DEBUG_ATOMIC("%s xin:%d lvl:%d/%d\n",
+   dpu_vbif_name(params->vbif_idx), params->xin_id, i,
qos_tbl->priority_lvl[i]);
vbif->ops.set_qos_remap(vbif, params->xin_id, i,
qos_tbl->priority_lvl[i]);
@@ -263,8 +275,8 @@ void dpu_vbif_clear_errors(struct dpu_kms *dpu_kms)
if (vbif && vbif->o

[Freedreno] [PATCH 2/9] drm/msm/gpu: Drop duplicate fence counter

2022-03-29 Thread Rob Clark
From: Rob Clark 

The ring seqno counter duplicates the fence-context last_fence counter.
They end up getting incremented in lock-step, on the same scheduler
thread, but the split just makes things less obvious.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   | 2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 ++--
 drivers/gpu/drm/msm/msm_gpu.c   | 8 
 drivers/gpu/drm/msm/msm_gpu.h   | 2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.h| 1 -
 6 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 407f50a15faa..d31aa87c6c8d 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1235,7 +1235,7 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
return;
 
DRM_DEV_ERROR(dev->dev, "gpu fault ring %d fence %x status %8.8X rb 
%4.4x/%4.4x ib1 %16.16llX/%4.4x ib2 %16.16llX/%4.4x\n",
-   ring ? ring->id : -1, ring ? ring->seqno : 0,
+   ring ? ring->id : -1, ring ? ring->fctx->last_fence : 0,
gpu_read(gpu, REG_A5XX_RBBM_STATUS),
gpu_read(gpu, REG_A5XX_CP_RB_RPTR),
gpu_read(gpu, REG_A5XX_CP_RB_WPTR),
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 83c31b2ad865..17de46fc4bf2 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1390,7 +1390,7 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu)
 
DRM_DEV_ERROR(&gpu->pdev->dev,
"gpu fault ring %d fence %x status %8.8X rb %4.4x/%4.4x ib1 
%16.16llX/%4.4x ib2 %16.16llX/%4.4x\n",
-   ring ? ring->id : -1, ring ? ring->seqno : 0,
+   ring ? ring->id : -1, ring ? ring->fctx->last_fence : 0,
gpu_read(gpu, REG_A6XX_RBBM_STATUS),
gpu_read(gpu, REG_A6XX_CP_RB_RPTR),
gpu_read(gpu, REG_A6XX_CP_RB_WPTR),
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 45f2c6084aa7..6385ab06632f 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -578,7 +578,7 @@ int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state)
 
state->ring[i].fence = gpu->rb[i]->memptrs->fence;
state->ring[i].iova = gpu->rb[i]->iova;
-   state->ring[i].seqno = gpu->rb[i]->seqno;
+   state->ring[i].seqno = gpu->rb[i]->fctx->last_fence;
state->ring[i].rptr = get_rptr(adreno_gpu, gpu->rb[i]);
state->ring[i].wptr = get_wptr(gpu->rb[i]);
 
@@ -828,7 +828,7 @@ void adreno_dump_info(struct msm_gpu *gpu)
 
printk("rb %d: fence:%d/%d\n", i,
ring->memptrs->fence,
-   ring->seqno);
+   ring->fctx->last_fence);
 
printk("rptr: %d\n", get_rptr(adreno_gpu, ring));
printk("rb wptr:  %d\n", get_wptr(ring));
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 747b89aa9d13..9480bdf875db 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -534,7 +534,7 @@ static void hangcheck_handler(struct timer_list *t)
if (fence != ring->hangcheck_fence) {
/* some progress has been made.. ya! */
ring->hangcheck_fence = fence;
-   } else if (fence_before(fence, ring->seqno)) {
+   } else if (fence_before(fence, ring->fctx->last_fence)) {
/* no progress and not done.. hung! */
ring->hangcheck_fence = fence;
DRM_DEV_ERROR(dev->dev, "%s: hangcheck detected gpu lockup rb 
%d!\n",
@@ -542,13 +542,13 @@ static void hangcheck_handler(struct timer_list *t)
DRM_DEV_ERROR(dev->dev, "%s: completed fence: %u\n",
gpu->name, fence);
DRM_DEV_ERROR(dev->dev, "%s: submitted fence: %u\n",
-   gpu->name, ring->seqno);
+   gpu->name, ring->fctx->last_fence);
 
kthread_queue_work(gpu->worker, &gpu->recover_work);
}
 
/* if still more pending work, reset the hangcheck timer: */
-   if (fence_after(ring->seqno, ring->hangcheck_fence))
+   if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
hangcheck_timer_reset(gpu);
 
/* workaround for missing irq: */
@@ -770,7 +770,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 
msm_gpu_hw_init(gpu);
 
-   submit->seqno = ++ring->seqno;
+   submit->seqno = submit->hw_fence->seqno;
 
msm_rd_dump_submit(priv->rd, submit, NULL);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm
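
The hangcheck hunk above relies on wraparound-safe 32-bit seqno
comparisons. Paraphrasing the helpers that already live in msm_gpu.h
(shown here only to make the fence_before()/fence_after() calls in the
hunks readable):

static inline bool fence_after(uint32_t a, uint32_t b)
{
	/* signed difference handles u32 wraparound correctly */
	return (int32_t)(a - b) > 0;
}

static inline bool fence_before(uint32_t a, uint32_t b)
{
	return fence_after(b, a);
}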

[Freedreno] [PATCH 3/9] drm/msm/gem: Split out inuse helper

2022-03-29 Thread Rob Clark
From: Rob Clark 

Prep for a following patch.  While we are at it, convert a few remaining
WARN_ON()s to GEM_WARN_ON().

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem.c |  2 +-
 drivers/gpu/drm/msm/msm_gem.h |  1 +
 drivers/gpu/drm/msm/msm_gem_vma.c | 15 ++-
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a4f61972667b..f96d1dc72021 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -938,7 +938,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
name, comm ? ":" : "", comm ? comm : "",
vma->aspace, vma->iova,
vma->mapped ? "mapped" : "unmapped",
-   vma->inuse);
+   msm_gem_vma_inuse(vma));
kfree(comm);
}
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 947ff7d9b471..1b7f0f0b88bf 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -61,6 +61,7 @@ struct msm_gem_vma {
 int msm_gem_init_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma, int npages,
u64 range_start, u64 range_end);
+bool msm_gem_vma_inuse(struct msm_gem_vma *vma);
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma);
 void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index f914ddbaea89..dc2ae097805e 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -37,6 +37,11 @@ msm_gem_address_space_get(struct msm_gem_address_space *aspace)
return aspace;
 }
 
+bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
+{
+   return !!vma->inuse;
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma)
@@ -44,7 +49,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
unsigned size = vma->node.size << PAGE_SHIFT;
 
/* Print a message if we try to purge a vma in use */
-   if (WARN_ON(vma->inuse > 0))
+   if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
return;
 
/* Don't do anything if the memory isn't mapped */
@@ -61,7 +66,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
 void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma)
 {
-   if (!WARN_ON(!vma->iova))
+   if (!GEM_WARN_ON(!vma->iova))
vma->inuse--;
 }
 
@@ -73,7 +78,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
unsigned size = npages << PAGE_SHIFT;
int ret = 0;
 
-   if (WARN_ON(!vma->iova))
+   if (GEM_WARN_ON(!vma->iova))
return -EINVAL;
 
/* Increase the usage counter */
@@ -100,7 +105,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
 void msm_gem_close_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma)
 {
-   if (WARN_ON(vma->inuse > 0 || vma->mapped))
+   if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))
return;
 
spin_lock(&aspace->lock);
@@ -120,7 +125,7 @@ int msm_gem_init_vma(struct msm_gem_address_space *aspace,
 {
int ret;
 
-   if (WARN_ON(vma->iova))
+   if (GEM_WARN_ON(vma->iova))
return -EBUSY;
 
spin_lock(&aspace->lock);
-- 
2.35.1



[Freedreno] [PATCH 1/9] drm/msm/gem: Move prototypes

2022-03-29 Thread Rob Clark
From: Rob Clark 

These belong more cleanly in the gem header.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_drv.h | 23 ---
 drivers/gpu/drm/msm/msm_gem.h | 22 ++
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9f68aa685ed7..daf60d219463 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -250,29 +250,6 @@ void msm_atomic_state_free(struct drm_atomic_state *state);
 int msm_crtc_enable_vblank(struct drm_crtc *crtc);
 void msm_crtc_disable_vblank(struct drm_crtc *crtc);
 
-int msm_gem_init_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma, int npages,
-   u64 range_start, u64 range_end);
-void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma);
-void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma);
-int msm_gem_map_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma, int prot,
-   struct sg_table *sgt, int npages);
-void msm_gem_close_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma);
-
-
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace);
-
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
-
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
-   u64 va_start, u64 size);
-
 int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 58e11c282928..947ff7d9b471 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -40,6 +40,15 @@ struct msm_gem_address_space {
int faults;
 };
 
+struct msm_gem_address_space *
+msm_gem_address_space_get(struct msm_gem_address_space *aspace);
+
+void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
+
+struct msm_gem_address_space *
+msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+   u64 va_start, u64 size);
+
 struct msm_gem_vma {
struct drm_mm_node node;
uint64_t iova;
@@ -49,6 +58,19 @@ struct msm_gem_vma {
int inuse;
 };
 
+int msm_gem_init_vma(struct msm_gem_address_space *aspace,
+   struct msm_gem_vma *vma, int npages,
+   u64 range_start, u64 range_end);
+void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
+   struct msm_gem_vma *vma);
+void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
+   struct msm_gem_vma *vma);
+int msm_gem_map_vma(struct msm_gem_address_space *aspace,
+   struct msm_gem_vma *vma, int prot,
+   struct sg_table *sgt, int npages);
+void msm_gem_close_vma(struct msm_gem_address_space *aspace,
+   struct msm_gem_vma *vma);
+
 struct msm_gem_object {
struct drm_gem_object base;
 
-- 
2.35.1



[Freedreno] [PATCH 4/9] drm/msm/gem: Drop PAGE_SHIFT for address space mm

2022-03-29 Thread Rob Clark
From: Rob Clark 

Get rid of all the unnecessary conversion between address/size and page
offsets.  It just confuses things.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c |  2 +-
 drivers/gpu/drm/msm/msm_gem.c |  5 ++---
 drivers/gpu/drm/msm/msm_gem.h |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c | 16 
 4 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 3e325e2a2b1b..9f76f5b15759 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1172,7 +1172,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
return PTR_ERR(bo->obj);
 
ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
-   range_start >> PAGE_SHIFT, range_end >> PAGE_SHIFT);
+range_start, range_end);
if (ret) {
drm_gem_object_put(bo->obj);
return ret;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index f96d1dc72021..f4b68bb28a4d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -392,7 +392,7 @@ static int get_iova_locked(struct drm_gem_object *obj,
if (IS_ERR(vma))
return PTR_ERR(vma);
 
-   ret = msm_gem_init_vma(aspace, vma, obj->size >> PAGE_SHIFT,
+   ret = msm_gem_init_vma(aspace, vma, obj->size,
range_start, range_end);
if (ret) {
del_vma(vma);
@@ -434,8 +434,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
if (IS_ERR(pages))
return PTR_ERR(pages);
 
-   ret = msm_gem_map_vma(aspace, vma, prot,
-   msm_obj->sgt, obj->size >> PAGE_SHIFT);
+   ret = msm_gem_map_vma(aspace, vma, prot, msm_obj->sgt, obj->size);
 
if (!ret)
msm_obj->pin_count++;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 1b7f0f0b88bf..090c3b1a6d9a 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -59,7 +59,7 @@ struct msm_gem_vma {
 };
 
 int msm_gem_init_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma, int npages,
+   struct msm_gem_vma *vma, int size,
u64 range_start, u64 range_end);
 bool msm_gem_vma_inuse(struct msm_gem_vma *vma);
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
@@ -68,7 +68,7 @@ void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma);
 int msm_gem_map_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma, int prot,
-   struct sg_table *sgt, int npages);
+   struct sg_table *sgt, int size);
 void msm_gem_close_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index dc2ae097805e..4949899f1fc7 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -46,7 +46,7 @@ bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma)
 {
-   unsigned size = vma->node.size << PAGE_SHIFT;
+   unsigned size = vma->node.size;
 
/* Print a message if we try to purge a vma in use */
if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
@@ -73,9 +73,8 @@ void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
 int
 msm_gem_map_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma, int prot,
-   struct sg_table *sgt, int npages)
+   struct sg_table *sgt, int size)
 {
-   unsigned size = npages << PAGE_SHIFT;
int ret = 0;
 
if (GEM_WARN_ON(!vma->iova))
@@ -120,7 +119,7 @@ void msm_gem_close_vma(struct msm_gem_address_space *aspace,
 
 /* Initialize a new vma and allocate an iova for it */
 int msm_gem_init_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma, int npages,
+   struct msm_gem_vma *vma, int size,
u64 range_start, u64 range_end)
 {
int ret;
@@ -129,14 +128,15 @@ int msm_gem_init_vma(struct msm_gem_address_space *aspace,
return -EBUSY;
 
spin_lock(&aspace->lock);
-   ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, npages, 0,
-   0, range_start, range_end, 0);
+   ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node,
+ size, PAGE_SIZE, 0,
+ range_start, range_end, 0);
spin_unlock(&aspace->lock);
 
if (ret)
return ret;
 
-   vma->iova = v

[Freedreno] [PATCH 0/9] drm/msm: Userspace allocated GPU addresses

2022-03-29 Thread Rob Clark
From: Rob Clark 

The first five patches are various cleanups and simplifications.  The
next two get rid of redundant vma lookups in the submit and retire
paths.  Following that, fenced vma lets us indicate a fence value
following which the vma is no longer used, which is needed because
otherwise userspace could observe the signaled fence prior to
retire_submits() finishing.  (With userspace allocated GPU addresses
userspace is tracking when a buffer is no longer used and its vma can
be deleted.)  And finally the last patch adds the new uabi for user-
space allocated iova.

Rob Clark (9):
  drm/msm/gem: Move prototypes
  drm/msm/gpu: Drop duplicate fence counter
  drm/msm/gem: Split out inuse helper
  drm/msm/gem: Drop PAGE_SHIFT for address space mm
  drm/msm: Drop msm_gem_iova()
  drm/msm/gem: Rework vma lookup and pin
  drm/msm/gem: Split vma lookup and pin
  drm/msm/gem: Add fenced vma unpin
  drm/msm: Add a way for userspace to allocate GPU iova

 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   |   2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c   |   2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |   2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  14 ++-
 drivers/gpu/drm/msm/msm_drv.c   |  21 
 drivers/gpu/drm/msm/msm_drv.h   |  23 
 drivers/gpu/drm/msm/msm_fb.c|  16 ++-
 drivers/gpu/drm/msm/msm_fence.c |   6 +-
 drivers/gpu/drm/msm/msm_fence.h |   3 +
 drivers/gpu/drm/msm/msm_gem.c   | 151 ++--
 drivers/gpu/drm/msm/msm_gem.h   |  47 +++-
 drivers/gpu/drm/msm/msm_gem_submit.c|  17 ++-
 drivers/gpu/drm/msm/msm_gem_vma.c   |  59 ++---
 drivers/gpu/drm/msm/msm_gpu.c   |   8 +-
 drivers/gpu/drm/msm/msm_gpu.h   |   2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c|  12 +-
 drivers/gpu/drm/msm/msm_ringbuffer.h|   1 -
 include/uapi/drm/msm_drm.h  |   3 +
 18 files changed, 258 insertions(+), 131 deletions(-)

-- 
2.35.1



[Freedreno] [PATCH 5/9] drm/msm: Drop msm_gem_iova()

2022-03-29 Thread Rob Clark
From: Rob Clark 

There was only a single user, which could just as easily stash the iova
when pinning.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_fb.c  | 16 ++--
 drivers/gpu/drm/msm/msm_gem.c | 16 
 drivers/gpu/drm/msm/msm_gem.h |  2 --
 3 files changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 7137492fe78e..d4eef66e29dc 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -21,6 +21,9 @@ struct msm_framebuffer {
 
/* Count of # of attached planes which need dirtyfb: */
refcount_t dirtyfb;
+
+   /* Framebuffer per-plane address, if pinned, else zero: */
+   uint64_t iova[DRM_FORMAT_MAX_PLANES];
 };
 #define to_msm_framebuffer(x) container_of(x, struct msm_framebuffer, base)
 
@@ -76,14 +79,14 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 {
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
int ret, i, n = fb->format->num_planes;
-   uint64_t iova;
 
if (needs_dirtyfb)
refcount_inc(&msm_fb->dirtyfb);
 
for (i = 0; i < n; i++) {
-   ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &iova);
-   drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)", fb->base.id, i, iova, ret);
+   ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]);
+   drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)",
+ fb->base.id, i, msm_fb->iova[i], ret);
if (ret)
return ret;
}
@@ -103,14 +106,15 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 
for (i = 0; i < n; i++)
msm_gem_unpin_iova(fb->obj[i], aspace);
+
+   memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
struct msm_gem_address_space *aspace, int plane)
 {
-   if (!fb->obj[plane])
-   return 0;
-   return msm_gem_iova(fb->obj[plane], aspace) + fb->offsets[plane];
+   struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
+   return msm_fb->iova[plane];
 }
 
struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index f4b68bb28a4d..deafae6feaa8 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -509,22 +509,6 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
return ret;
 }
 
-/* get iova without taking a reference, used in places where you have
- * already done a 'msm_gem_get_and_pin_iova' or 'msm_gem_get_iova'
- */
-uint64_t msm_gem_iova(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace)
-{
-   struct msm_gem_vma *vma;
-
-   msm_gem_lock(obj);
-   vma = lookup_vma(obj, aspace);
-   msm_gem_unlock(obj);
-   GEM_WARN_ON(!vma);
-
-   return vma ? vma->iova : 0;
-}
-
 /*
  * Locked variant of msm_gem_unpin_iova()
  */
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 090c3b1a6d9a..772de010a669 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -142,8 +142,6 @@ int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova);
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova);
-uint64_t msm_gem_iova(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace);
 void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-- 
2.35.1



[Freedreno] [PATCH 6/9] drm/msm/gem: Rework vma lookup and pin

2022-03-29 Thread Rob Clark
From: Rob Clark 

Combines duplicate vma lookup in the get_and_pin path.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem.c | 50 ++-
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index deafae6feaa8..218744a490a4 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -376,39 +376,40 @@ put_iova_vmas(struct drm_gem_object *obj)
}
 }
 
-static int get_iova_locked(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace, uint64_t *iova,
+static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
+   struct msm_gem_address_space *aspace,
u64 range_start, u64 range_end)
 {
struct msm_gem_vma *vma;
-   int ret = 0;
 
GEM_WARN_ON(!msm_gem_is_locked(obj));
 
vma = lookup_vma(obj, aspace);
 
if (!vma) {
+   int ret;
+
vma = add_vma(obj, aspace);
if (IS_ERR(vma))
-   return PTR_ERR(vma);
+   return vma;
 
ret = msm_gem_init_vma(aspace, vma, obj->size,
range_start, range_end);
if (ret) {
del_vma(vma);
-   return ret;
+   return ERR_PTR(ret);
}
+   } else {
+   GEM_WARN_ON(vma->iova < range_start);
+   GEM_WARN_ON((vma->iova + obj->size) > range_end);
}
 
-   *iova = vma->iova;
-   return 0;
+   return vma;
 }
 
-static int msm_gem_pin_iova(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace)
+static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 {
struct msm_gem_object *msm_obj = to_msm_bo(obj);
-   struct msm_gem_vma *vma;
struct page **pages;
int ret, prot = IOMMU_READ;
 
@@ -426,15 +427,11 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED))
return -EBUSY;
 
-   vma = lookup_vma(obj, aspace);
-   if (GEM_WARN_ON(!vma))
-   return -EINVAL;
-
pages = get_pages(obj);
if (IS_ERR(pages))
return PTR_ERR(pages);
 
-   ret = msm_gem_map_vma(aspace, vma, prot, msm_obj->sgt, obj->size);
+   ret = msm_gem_map_vma(vma->aspace, vma, prot, msm_obj->sgt, obj->size);
 
if (!ret)
msm_obj->pin_count++;
@@ -446,19 +443,18 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova,
u64 range_start, u64 range_end)
 {
-   u64 local;
+   struct msm_gem_vma *vma;
int ret;
 
GEM_WARN_ON(!msm_gem_is_locked(obj));
 
-   ret = get_iova_locked(obj, aspace, &local,
-   range_start, range_end);
-
-   if (!ret)
-   ret = msm_gem_pin_iova(obj, aspace);
+   vma = get_vma_locked(obj, aspace, range_start, range_end);
+   if (IS_ERR(vma))
+   return PTR_ERR(vma);
 
+   ret = msm_gem_pin_iova(obj, vma);
if (!ret)
-   *iova = local;
+   *iova = vma->iova;
 
return ret;
 }
@@ -500,10 +496,16 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 int msm_gem_get_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova)
 {
-   int ret;
+   struct msm_gem_vma *vma;
+   int ret = 0;
 
msm_gem_lock(obj);
-   ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX);
+   vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+   if (IS_ERR(vma)) {
+   ret = PTR_ERR(vma);
+   } else {
+   *iova = vma->iova;
+   }
msm_gem_unlock(obj);
 
return ret;
-- 
2.35.1



[Freedreno] [PATCH 7/9] drm/msm/gem: Split vma lookup and pin

2022-03-29 Thread Rob Clark
From: Rob Clark 

This way we only lookup vma once per object per submit, for both the
submit and retire path.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem.c| 60 +---
 drivers/gpu/drm/msm/msm_gem.h|  9 +++--
 drivers/gpu/drm/msm/msm_gem_submit.c | 17 +---
 3 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 218744a490a4..e8107a22c33a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -407,7 +407,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
return vma;
 }
 
-static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
 {
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct page **pages;
@@ -439,6 +439,26 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_vma *vma)
return ret;
 }
 
+void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+{
+   struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+   GEM_WARN_ON(!msm_gem_is_locked(obj));
+
+   msm_gem_unmap_vma(vma->aspace, vma);
+
+   msm_obj->pin_count--;
+   GEM_WARN_ON(msm_obj->pin_count < 0);
+
+   update_inactive(msm_obj);
+}
+
+struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+  struct msm_gem_address_space *aspace)
+{
+   return get_vma_locked(obj, aspace, 0, U64_MAX);
+}
+
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova,
u64 range_start, u64 range_end)
@@ -452,7 +472,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
if (IS_ERR(vma))
return PTR_ERR(vma);
 
-   ret = msm_gem_pin_iova(obj, vma);
+   ret = msm_gem_pin_vma_locked(obj, vma);
if (!ret)
*iova = vma->iova;
 
@@ -476,12 +496,6 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
return ret;
 }
 
-int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace, uint64_t *iova)
-{
-   return get_and_pin_iova_range_locked(obj, aspace, iova, 0, U64_MAX);
-}
-
 /* get iova and pin it. Should have a matching put */
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova)
@@ -511,29 +525,6 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
return ret;
 }
 
-/*
- * Locked variant of msm_gem_unpin_iova()
- */
-void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
-   struct msm_gem_address_space *aspace)
-{
-   struct msm_gem_object *msm_obj = to_msm_bo(obj);
-   struct msm_gem_vma *vma;
-
-   GEM_WARN_ON(!msm_gem_is_locked(obj));
-
-   vma = lookup_vma(obj, aspace);
-
-   if (!GEM_WARN_ON(!vma)) {
-   msm_gem_unmap_vma(aspace, vma);
-
-   msm_obj->pin_count--;
-   GEM_WARN_ON(msm_obj->pin_count < 0);
-
-   update_inactive(msm_obj);
-   }
-}
-
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
@@ -542,8 +533,13 @@ void msm_gem_unpin_iova_locked(struct drm_gem_object *obj,
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace)
 {
+   struct msm_gem_vma *vma;
+
msm_gem_lock(obj);
-   msm_gem_unpin_iova_locked(obj, aspace);
+   vma = lookup_vma(obj, aspace);
+   if (!GEM_WARN_ON(!vma)) {
+   msm_gem_unpin_vma_locked(obj, vma);
+   }
msm_gem_unlock(obj);
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 772de010a669..f98264cf130d 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -133,17 +133,17 @@ struct msm_gem_object {
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
 
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+  struct msm_gem_address_space *aspace);
 int msm_gem_get_iova(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace, uint64_t *iova,
u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj,
-  
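
Since the msm_gem_submit.c hunks are truncated above, here is a
paraphrased sketch of the calling pattern the commit message describes:
look the vma up once at submit time, stash it, and reuse it at retire
time instead of a second lookup (the bos[i].vma stash shown here is
illustrative, not the exact patch body):

	/* submit path: one lookup, then pin */
	vma = msm_gem_get_vma_locked(obj, submit->aspace);
	if (IS_ERR(vma))
		return PTR_ERR(vma);

	ret = msm_gem_pin_vma_locked(obj, vma);
	if (ret)
		return ret;

	submit->bos[i].iova = vma->iova;
	submit->bos[i].vma = vma;		/* reused at retire */

	/* retire path: unpin via the stashed vma, no second lookup */
	msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);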

[Freedreno] [PATCH 8/9] drm/msm/gem: Add fenced vma unpin

2022-03-29 Thread Rob Clark
From: Rob Clark 

With userspace allocated iova (next patch), we can have a race condition
where userspace observes the fence completion and deletes the vma before
retire_submit() gets around to unpinning the vma.  To handle this, add a
fenced unpin which drops the refcount but tracks the fence, and update
msm_gem_vma_inuse() to check any previously unsignaled fences.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_fence.c  |  6 --
 drivers/gpu/drm/msm/msm_fence.h  |  3 +++
 drivers/gpu/drm/msm/msm_gem.c|  2 +-
 drivers/gpu/drm/msm/msm_gem.h|  9 +++--
 drivers/gpu/drm/msm/msm_gem_vma.c| 28 +---
 drivers/gpu/drm/msm/msm_ringbuffer.c | 12 +++-
 6 files changed, 51 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index f2cece542c3f..3df255402a33 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -15,6 +15,7 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
const char *name)
 {
struct msm_fence_context *fctx;
+   static int index = 0;
 
fctx = kzalloc(sizeof(*fctx), GFP_KERNEL);
if (!fctx)
@@ -23,6 +24,7 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
fctx->dev = dev;
strncpy(fctx->name, name, sizeof(fctx->name));
fctx->context = dma_fence_context_alloc(1);
+   fctx->index = index++;
fctx->fenceptr = fenceptr;
spin_lock_init(&fctx->spinlock);
 
@@ -34,7 +36,7 @@ void msm_fence_context_free(struct msm_fence_context *fctx)
kfree(fctx);
 }
 
-static inline bool fence_completed(struct msm_fence_context *fctx, uint32_t fence)
+bool msm_fence_completed(struct msm_fence_context *fctx, uint32_t fence)
 {
/*
 * Note: Check completed_fence first, as fenceptr is in a write-combine
@@ -76,7 +78,7 @@ static const char *msm_fence_get_timeline_name(struct dma_fence *fence)
 static bool msm_fence_signaled(struct dma_fence *fence)
 {
struct msm_fence *f = to_msm_fence(fence);
-   return fence_completed(f->fctx, f->base.seqno);
+   return msm_fence_completed(f->fctx, f->base.seqno);
 }
 
 static const struct dma_fence_ops msm_fence_ops = {
diff --git a/drivers/gpu/drm/msm/msm_fence.h b/drivers/gpu/drm/msm/msm_fence.h
index 17ee3822b423..7f1798c54cd1 100644
--- a/drivers/gpu/drm/msm/msm_fence.h
+++ b/drivers/gpu/drm/msm/msm_fence.h
@@ -21,6 +21,8 @@ struct msm_fence_context {
char name[32];
/** context: see dma_fence_context_alloc() */
unsigned context;
+   /** index: similar to context, but local to msm_fence_context's */
+   unsigned index;
 
/**
 * last_fence:
@@ -56,6 +58,7 @@ struct msm_fence_context * msm_fence_context_alloc(struct drm_device *dev,
volatile uint32_t *fenceptr, const char *name);
 void msm_fence_context_free(struct msm_fence_context *fctx);
 
+bool msm_fence_completed(struct msm_fence_context *fctx, uint32_t fence);
 void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence);
 
 struct dma_fence * msm_fence_alloc(struct msm_fence_context *fctx);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e8107a22c33a..bf4af17e2f1e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -445,7 +445,7 @@ void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vm
 
GEM_WARN_ON(!msm_gem_is_locked(obj));
 
-   msm_gem_unmap_vma(vma->aspace, vma);
+   msm_gem_unpin_vma(vma);
 
msm_obj->pin_count--;
GEM_WARN_ON(msm_obj->pin_count < 0);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f98264cf130d..38d66e1248b1 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -49,6 +49,8 @@ struct msm_gem_address_space *
 msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
u64 va_start, u64 size);
 
+struct msm_fence_context;
+
 struct msm_gem_vma {
struct drm_mm_node node;
uint64_t iova;
@@ -56,6 +58,9 @@ struct msm_gem_vma {
struct list_head list;/* node in msm_gem_object::vmas */
bool mapped;
int inuse;
+   uint32_t fence_mask;
+   uint32_t fence[MSM_GPU_MAX_RINGS];
+   struct msm_fence_context *fctx[MSM_GPU_MAX_RINGS];
 };
 
 int msm_gem_init_vma(struct msm_gem_address_space *aspace,
@@ -64,8 +69,8 @@ int msm_gem_init_vma(struct msm_gem_address_space *aspace,
 bool msm_gem_vma_inuse(struct msm_gem_vma *vma);
 void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma);
-void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
-   struct msm_gem_vma *vma);
+void msm_gem_unpin_vma(struct msm_gem_vma *vma);
+void msm_gem_unpin_vma_fenced(struct msm_gem_vma *vma, struct msm_fence_context *f
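
The msm_gem_vma.c hunk is truncated above; to illustrate the scheme, a
sketch of how msm_gem_vma_inuse() can consult the per-ring fences this
patch adds (paraphrased from the fence_mask/fence[]/fctx[] fields
introduced in the msm_gem.h hunk, not the exact patch body):

bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
{
	if (vma->inuse > 0)
		return true;

	/* check fences recorded by a fenced unpin that haven't signaled */
	while (vma->fence_mask) {
		unsigned idx = ffs(vma->fence_mask) - 1;

		if (!msm_fence_completed(vma->fctx[idx], vma->fence[idx]))
			return true;

		/* signaled: no need to check this ring again */
		vma->fence_mask &= ~BIT(idx);
	}

	return false;
}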

[Freedreno] [PATCH 9/9] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-29 Thread Rob Clark
From: Rob Clark 

The motivation at this point is mainly native userspace mesa driver in a
VM guest.  The one remaining synchronous "hotpath" is buffer allocation,
because guest needs to wait to know the bo's iova before it can start
emitting cmdstream/state that references the new bo.  By allocating the
iova in the guest userspace, we no longer need to wait for a response
from the host, but can just rely on the allocation request being
processed before the cmdstream submission.  Allocation failures (OoM,
etc) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET)
or subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++
 drivers/gpu/drm/msm/msm_drv.c   | 21 +++
 drivers/gpu/drm/msm/msm_gem.c   | 48 +
 drivers/gpu/drm/msm/msm_gem.h   |  8 +
 drivers/gpu/drm/msm/msm_gem_vma.c   |  2 ++
 include/uapi/drm/msm_drm.h  |  3 ++
 6 files changed, 92 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 6385ab06632f..4caae0229518 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -281,6 +281,16 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
case MSM_PARAM_SUSPENDS:
*value = gpu->suspend_count;
return 0;
+   case MSM_PARAM_VA_START:
+   if (ctx->aspace == gpu->aspace)
+   return -EINVAL;
+   *value = ctx->aspace->va_start;
+   return 0;
+   case MSM_PARAM_VA_SIZE:
+   if (ctx->aspace == gpu->aspace)
+   return -EINVAL;
+   *value = ctx->aspace->va_size;
+   return 0;
default:
DBG("%s: invalid param: %u", gpu->name, param);
return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index a5eed5738ac8..7394312cf075 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -719,6 +719,23 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
return msm_gem_get_iova(obj, ctx->aspace, iova);
 }
 
+static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
+   struct drm_file *file, struct drm_gem_object *obj,
+   uint64_t iova)
+{
+   struct msm_drm_private *priv = dev->dev_private;
+   struct msm_file_private *ctx = file->driver_priv;
+
+   if (!priv->gpu)
+   return -EINVAL;
+
+   /* Only supported if per-process address space is supported: */
+   if (priv->gpu->aspace == ctx->aspace)
+   return -EINVAL;
+
+   return msm_gem_set_iova(obj, ctx->aspace, iova);
+}
+
 static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
struct drm_file *file)
 {
@@ -733,6 +750,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
switch (args->info) {
case MSM_INFO_GET_OFFSET:
case MSM_INFO_GET_IOVA:
+   case MSM_INFO_SET_IOVA:
/* value returned as immediate, not pointer, so len==0: */
if (args->len)
return -EINVAL;
@@ -757,6 +775,9 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
case MSM_INFO_GET_IOVA:
ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
break;
+   case MSM_INFO_SET_IOVA:
+   ret = msm_ioctl_gem_info_set_iova(dev, file, obj, args->value);
+   break;
case MSM_INFO_SET_NAME:
/* length check should leave room for terminating null: */
if (args->len >= sizeof(msm_obj->name)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bf4af17e2f1e..3122ba308f31 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -525,6 +525,54 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
return ret;
 }
 
+static int clear_iova(struct drm_gem_object *obj,
+ struct msm_gem_address_space *aspace)
+{
+   struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+
+   if (!vma)
+   return 0;
+
+   if (vma->inuse)
+   return -EBUSY;
+
+   msm_gem_purge_vma(vma->aspace, vma);
+   msm_gem_close_vma(vma->aspace, vma);
+   del_vma(vma);
+
+   return 0;
+}
+
+/*
+ * Get the requested iova but don't pin it.  Fails if the requested iova is
+ * not available.  Doesn't need a put because iovas are currently valid for
+ * the life of the object.
+ *
+ * Setting an iova of zero will clear the vma.
+ */
+int msm_gem_set_iova(struct drm_gem_object *obj,
+struct msm_gem_address_space *aspace, uint64_t iova)
+{
+   int ret = 0;
+
+   msm_gem_lock(obj);
+   if (!iova) {
+ 
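
To show how the motivation plays out in userspace, a hedged sketch of
driving the new uapi with libdrm: query the usable va range via the
new MSM_PARAM_VA_START/MSM_PARAM_VA_SIZE params, then pin a BO to a
userspace-chosen address with MSM_INFO_SET_IOVA.  Struct layouts follow
the existing msm_drm.h uapi; the new enum values come from this patch,
and the iova allocator itself is left to the caller:

#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

static int msm_get_param(int fd, uint32_t param, uint64_t *value)
{
	struct drm_msm_param req = {
		.pipe = MSM_PIPE_3D0,
		.param = param,
	};
	int ret = drmCommandWriteRead(fd, DRM_MSM_GET_PARAM, &req, sizeof(req));

	if (!ret)
		*value = req.value;
	return ret;
}

static int msm_bo_set_iova(int fd, uint32_t handle, uint64_t iova)
{
	struct drm_msm_gem_info req = {
		.handle = handle,
		.info = MSM_INFO_SET_IOVA,
		.value = iova,	/* an iova of zero clears the vma */
	};

	return drmCommandWriteRead(fd, DRM_MSM_GEM_INFO, &req, sizeof(req));
}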

Re: [Freedreno] [PATCH 1/9] drm/msm/gem: Move prototypes

2022-03-29 Thread Dmitry Baryshkov
On Wed, 30 Mar 2022 at 02:00, Rob Clark  wrote:
>
> From: Rob Clark 
>
> These belong more cleanly in the gem header.
>
> Signed-off-by: Rob Clark 

Reviewed-by: Dmitry Baryshkov 



-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH 4/9] drm/msm/gem: Drop PAGE_SHIFT for address space mm

2022-03-29 Thread Dmitry Baryshkov
On Wed, 30 Mar 2022 at 02:00, Rob Clark  wrote:
>
> From: Rob Clark 
>
> Get rid of all the unnecessary conversion between address/size and page
> offsets.  It just confuses things.

Reviewed-by: Dmitry Baryshkov 



>
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c |  2 +-
>  drivers/gpu/drm/msm/msm_gem.c |  5 ++---
>  drivers/gpu/drm/msm/msm_gem.h |  4 ++--
>  drivers/gpu/drm/msm/msm_gem_vma.c | 16 
>  4 files changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 3e325e2a2b1b..9f76f5b15759 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -1172,7 +1172,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
> return PTR_ERR(bo->obj);
>
> ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
> -   range_start >> PAGE_SHIFT, range_end >> PAGE_SHIFT);
> +range_start, range_end);
> if (ret) {
> drm_gem_object_put(bo->obj);
> return ret;
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index f96d1dc72021..f4b68bb28a4d 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -392,7 +392,7 @@ static int get_iova_locked(struct drm_gem_object *obj,
> if (IS_ERR(vma))
> return PTR_ERR(vma);
>
> -   ret = msm_gem_init_vma(aspace, vma, obj->size >> PAGE_SHIFT,
> +   ret = msm_gem_init_vma(aspace, vma, obj->size,
> range_start, range_end);
> if (ret) {
> del_vma(vma);
> @@ -434,8 +434,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj,
> if (IS_ERR(pages))
> return PTR_ERR(pages);
>
> -   ret = msm_gem_map_vma(aspace, vma, prot,
> -   msm_obj->sgt, obj->size >> PAGE_SHIFT);
> +   ret = msm_gem_map_vma(aspace, vma, prot, msm_obj->sgt, obj->size);
>
> if (!ret)
> msm_obj->pin_count++;
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index 1b7f0f0b88bf..090c3b1a6d9a 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -59,7 +59,7 @@ struct msm_gem_vma {
>  };
>
>  int msm_gem_init_vma(struct msm_gem_address_space *aspace,
> -   struct msm_gem_vma *vma, int npages,
> +   struct msm_gem_vma *vma, int size,
> u64 range_start, u64 range_end);
>  bool msm_gem_vma_inuse(struct msm_gem_vma *vma);
>  void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
> @@ -68,7 +68,7 @@ void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
> struct msm_gem_vma *vma);
>  int msm_gem_map_vma(struct msm_gem_address_space *aspace,
> struct msm_gem_vma *vma, int prot,
> -   struct sg_table *sgt, int npages);
> +   struct sg_table *sgt, int size);
>  void msm_gem_close_vma(struct msm_gem_address_space *aspace,
> struct msm_gem_vma *vma);
>
> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> index dc2ae097805e..4949899f1fc7 100644
> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> @@ -46,7 +46,7 @@ bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
>  void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
> struct msm_gem_vma *vma)
>  {
> -   unsigned size = vma->node.size << PAGE_SHIFT;
> +   unsigned size = vma->node.size;
>
> /* Print a message if we try to purge a vma in use */
> if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
> @@ -73,9 +73,8 @@ void msm_gem_unmap_vma(struct msm_gem_address_space *aspace,
>  int
>  msm_gem_map_vma(struct msm_gem_address_space *aspace,
> struct msm_gem_vma *vma, int prot,
> -   struct sg_table *sgt, int npages)
> +   struct sg_table *sgt, int size)
>  {
> -   unsigned size = npages << PAGE_SHIFT;
> int ret = 0;
>
> if (GEM_WARN_ON(!vma->iova))
> @@ -120,7 +119,7 @@ void msm_gem_close_vma(struct msm_gem_address_space *aspace,
>
>  /* Initialize a new vma and allocate an iova for it */
>  int msm_gem_init_vma(struct msm_gem_address_space *aspace,
> -   struct msm_gem_vma *vma, int npages,
> +   struct msm_gem_vma *vma, int size,
> u64 range_start, u64 range_end)
>  {
> int ret;
> @@ -129,14 +128,15 @@ int msm_gem_init_vma(struct msm_gem_address_space *aspace,
> return -EBUSY;
>
> spin_lock(&aspace->lock);
> -   ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, npages, 0,
> -   0, range_start, range_end, 0);
> +   r
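The archive cuts the message off mid-hunk here.  Judging from the new
byte-based signature above, the replaced allocation presumably becomes
something like the following (a reconstruction for readability, not the
verbatim patch):

    spin_lock(&aspace->lock);
    /* 'size', range_start and range_end are now all in bytes, so the
     * drm_mm node is allocated in byte units as well: */
    ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, size, 0,
            0, range_start, range_end, 0);
    spin_unlock(&aspace->lock);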

Re: [Freedreno] [PATCH 9/9] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-29 Thread Dmitry Baryshkov
On Wed, 30 Mar 2022 at 02:00, Rob Clark  wrote:
>
> From: Rob Clark 
>
> The motivation at this point is mainly native userspace mesa driver in a
> VM guest.  The one remaining synchronous "hotpath" is buffer allocation,
> because guest needs to wait to know the bo's iova before it can start
> emitting cmdstream/state that references the new bo.  By allocating the
> iova in the guest userspace, we no longer need to wait for a response
> from the host, but can just rely on the allocation request being
> processed before the cmdstream submission.  Allocation faulures (OoM,

failures

> etc) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET)
> or subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.
>
> Signed-off-by: Rob Clark 

Reviewed-by: Dmitry Baryshkov 

Minor nits (above and below).

> ---
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++
>  drivers/gpu/drm/msm/msm_drv.c   | 21 +++
>  drivers/gpu/drm/msm/msm_gem.c   | 48 +
>  drivers/gpu/drm/msm/msm_gem.h   |  8 +
>  drivers/gpu/drm/msm/msm_gem_vma.c   |  2 ++
>  include/uapi/drm/msm_drm.h  |  3 ++
>  6 files changed, 92 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 6385ab06632f..4caae0229518 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -281,6 +281,16 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> case MSM_PARAM_SUSPENDS:
> *value = gpu->suspend_count;
> return 0;
> +   case MSM_PARAM_VA_START:
> +   if (ctx->aspace == gpu->aspace)
> +   return -EINVAL;
> +   *value = ctx->aspace->va_start;
> +   return 0;
> +   case MSM_PARAM_VA_SIZE:
> +   if (ctx->aspace == gpu->aspace)
> +   return -EINVAL;
> +   *value = ctx->aspace->va_size;
> +   return 0;
> default:
> DBG("%s: invalid param: %u", gpu->name, param);
> return -EINVAL;
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index a5eed5738ac8..7394312cf075 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -719,6 +719,23 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
> return msm_gem_get_iova(obj, ctx->aspace, iova);
>  }
>
> +static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> +   struct drm_file *file, struct drm_gem_object *obj,
> +   uint64_t iova)
> +{
> +   struct msm_drm_private *priv = dev->dev_private;
> +   struct msm_file_private *ctx = file->driver_priv;
> +
> +   if (!priv->gpu)
> +   return -EINVAL;
> +
> +   /* Only supported if per-process address space is supported: */
> +   if (priv->gpu->aspace == ctx->aspace)
> +   return -EINVAL;
> +
> +   return msm_gem_set_iova(obj, ctx->aspace, iova);
> +}
> +
>  static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> struct drm_file *file)
>  {
> @@ -733,6 +750,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> switch (args->info) {
> case MSM_INFO_GET_OFFSET:
> case MSM_INFO_GET_IOVA:
> +   case MSM_INFO_SET_IOVA:
> /* value returned as immediate, not pointer, so len==0: */
> if (args->len)
> return -EINVAL;
> @@ -757,6 +775,9 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> case MSM_INFO_GET_IOVA:
> ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
> break;
> +   case MSM_INFO_SET_IOVA:
> +   ret = msm_ioctl_gem_info_set_iova(dev, file, obj, 
> args->value);
> +   break;
> case MSM_INFO_SET_NAME:
> /* length check should leave room for terminating null: */
> if (args->len >= sizeof(msm_obj->name)) {
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index bf4af17e2f1e..3122ba308f31 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -525,6 +525,54 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
> return ret;
>  }
>
> +static int clear_iova(struct drm_gem_object *obj,
> + struct msm_gem_address_space *aspace)
> +{
> +   struct msm_gem_vma *vma = lookup_vma(obj, aspace);
> +
> +   if (!vma)
> +   return 0;
> +
> +   if (vma->inuse)

msm_gem_vma_inuse() ?

> +   return -EBUSY;
> +
> +   msm_gem_purge_vma(vma->aspace, vma);
> +   msm_gem_close_vma(vma->aspace, vma);
> +   del_vma(vma);
> +
> +   return 0;
> +}
> +
> +/*
> + * Get the requested iova but don't pin it.  Fails i
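For context, a sketch of how a guest-side allocator could size its iova
heap from the two new params, assuming the MSM_PARAM_VA_START and
MSM_PARAM_VA_SIZE tokens added above (the helper itself is hypothetical,
not part of the thread):

    #include <stdint.h>
    #include <xf86drm.h>
    #include "msm_drm.h"

    /* Query the per-process VA range.  The get_param fails (EINVAL in
     * this revision) when the kernel falls back to the global GPU
     * address space, in which case userspace iova allocation must be
     * disabled. */
    static int query_va_range(int fd, uint64_t *start, uint64_t *size)
    {
            struct drm_msm_param req = { .pipe = MSM_PIPE_3D0 };

            req.param = MSM_PARAM_VA_START;
            if (drmIoctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req))
                    return -1;
            *start = req.value;

            req.param = MSM_PARAM_VA_SIZE;
            if (drmIoctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req))
                    return -1;
            *size = req.value;

            return 0;
    }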

Re: [Freedreno] [PATCH 9/9] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-29 Thread Dmitry Osipenko


On 3/30/22 02:00, Rob Clark wrote:
> +static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> + struct drm_file *file, struct drm_gem_object *obj,
> + uint64_t iova)
> +{
> + struct msm_drm_private *priv = dev->dev_private;
> + struct msm_file_private *ctx = file->driver_priv;
> +
> + if (!priv->gpu)
> + return -EINVAL;
> +
> + /* Only supported if per-process address space is supported: */
> + if (priv->gpu->aspace == ctx->aspace)
> + return -EINVAL;

nit: -EOPNOTSUPP ?
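A sketch of the suggested tweak (illustrative only, not an applied patch):

    /* Only supported if per-process address space is supported: */
    if (priv->gpu->aspace == ctx->aspace)
            return -EOPNOTSUPP;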


Re: [Freedreno] [PATCH 9/9] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-29 Thread Rob Clark
On Tue, Mar 29, 2022 at 4:42 PM Dmitry Baryshkov wrote:
>
> On Wed, 30 Mar 2022 at 02:00, Rob Clark  wrote:
> >
> > From: Rob Clark 
> >
> > The motivation at this point is mainly native userspace mesa driver in a
> > VM guest.  The one remaining synchronous "hotpath" is buffer allocation,
> > because guest needs to wait to know the bo's iova before it can start
> > emitting cmdstream/state that references the new bo.  By allocating the
> > iova in the guest userspace, we no longer need to wait for a response
> > from the host, but can just rely on the allocation request being
> > processed before the cmdstream submission.  Allocation faulures (OoM,
>
> failures
>
> > etc) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET)
> > or subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.
> >
> > Signed-off-by: Rob Clark 
>
> Reviewed-by: Dmitry Baryshkov 
>
> Minor nits (above and below).
>
> > ---
> >  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++
> >  drivers/gpu/drm/msm/msm_drv.c   | 21 +++
> >  drivers/gpu/drm/msm/msm_gem.c   | 48 +
> >  drivers/gpu/drm/msm/msm_gem.h   |  8 +
> >  drivers/gpu/drm/msm/msm_gem_vma.c   |  2 ++
> >  include/uapi/drm/msm_drm.h  |  3 ++
> >  6 files changed, 92 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > index 6385ab06632f..4caae0229518 100644
> > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > @@ -281,6 +281,16 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > case MSM_PARAM_SUSPENDS:
> > *value = gpu->suspend_count;
> > return 0;
> > +   case MSM_PARAM_VA_START:
> > +   if (ctx->aspace == gpu->aspace)
> > +   return -EINVAL;
> > +   *value = ctx->aspace->va_start;
> > +   return 0;
> > +   case MSM_PARAM_VA_SIZE:
> > +   if (ctx->aspace == gpu->aspace)
> > +   return -EINVAL;
> > +   *value = ctx->aspace->va_size;
> > +   return 0;
> > default:
> > DBG("%s: invalid param: %u", gpu->name, param);
> > return -EINVAL;
> > diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> > index a5eed5738ac8..7394312cf075 100644
> > --- a/drivers/gpu/drm/msm/msm_drv.c
> > +++ b/drivers/gpu/drm/msm/msm_drv.c
> > @@ -719,6 +719,23 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
> > return msm_gem_get_iova(obj, ctx->aspace, iova);
> >  }
> >
> > +static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> > +   struct drm_file *file, struct drm_gem_object *obj,
> > +   uint64_t iova)
> > +{
> > +   struct msm_drm_private *priv = dev->dev_private;
> > +   struct msm_file_private *ctx = file->driver_priv;
> > +
> > +   if (!priv->gpu)
> > +   return -EINVAL;
> > +
> > +   /* Only supported if per-process address space is supported: */
> > +   if (priv->gpu->aspace == ctx->aspace)
> > +   return -EINVAL;
> > +
> > +   return msm_gem_set_iova(obj, ctx->aspace, iova);
> > +}
> > +
> >  static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> > struct drm_file *file)
> >  {
> > @@ -733,6 +750,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> > switch (args->info) {
> > case MSM_INFO_GET_OFFSET:
> > case MSM_INFO_GET_IOVA:
> > +   case MSM_INFO_SET_IOVA:
> > /* value returned as immediate, not pointer, so len==0: */
> > if (args->len)
> > return -EINVAL;
> > @@ -757,6 +775,9 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
> > case MSM_INFO_GET_IOVA:
> > ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
> > break;
> > +   case MSM_INFO_SET_IOVA:
> > +   ret = msm_ioctl_gem_info_set_iova(dev, file, obj, 
> > args->value);
> > +   break;
> > case MSM_INFO_SET_NAME:
> > /* length check should leave room for terminating null: */
> > if (args->len >= sizeof(msm_obj->name)) {
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index bf4af17e2f1e..3122ba308f31 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -525,6 +525,54 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
> > return ret;
> >  }
> >
> > +static int clear_iova(struct drm_gem_object *obj,
> > + struct msm_gem_address_space *aspace)
> > +{
> > +   struct msm_gem_vma *vma = lookup_vma(obj, aspace);
> > +
> > +   if (!vma)
> > +   return 0;
> > +
> >