Re: [PATCH] drm/radeon: fix copy of uninitialized variable back to userspace

2021-03-02 Thread Christian König

Am 03.03.21 um 01:27 schrieb Colin King:

From: Colin Ian King 

Currently the ioctl command RADEON_INFO_SI_BACKEND_ENABLED_MASK can
copy back uninitialised data in value_tmp that pointer *value points
to. This can occur when rdev->family is less than CHIP_BONAIRE and
also less than CHIP_TAHITI.  Fix this by adding the missing -EINVAL
return so that no invalid value is copied back to userspace.

Addresses-Coverity: ("Uninitialized scalar variable")
Cc: sta...@vger.kernel.org # 3.13+
Fixes: 439a1cfffe2c ("drm/radeon: expose render backend mask to the userspace")
Signed-off-by: Colin Ian King 


Reviewed-by: Christian König 

Let's hope that this doesn't break UAPI.

Christian.


---
  drivers/gpu/drm/radeon/radeon_kms.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index 2479d6ab7a36..58876bb4ef2a 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -518,6 +518,7 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
*value = rdev->config.si.backend_enable_mask;
} else {
DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
+   return -EINVAL;
}
break;
case RADEON_INFO_MAX_SCLK:


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: Check if FB BAR is enabled for ROM read

2021-03-02 Thread Lazar, Lijo
[AMD Public Use]

Some configurations don't have FB BAR enabled. Avoid reading ROM image
from FB BAR region in such cases.

Signed-off-by: Lijo Lazar <lijo.la...@amd.com>
Reviewed-by: Hawking Zhang <hawking.zh...@amd.com>
Reviewed-by: Feifei Xu <feifei...@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
index efdf639f6593..f454a6bd0ed6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
@@ -97,6 +97,10 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)
if (amdgpu_device_need_post(adev))
return false;

+   /* FB BAR not enabled */
+   if (pci_resource_len(adev->pdev, 0) == 0)
+   return false;
+
adev->bios = NULL;
vram_base = pci_resource_start(adev->pdev, 0);
bios = ioremap_wc(vram_base, size);
--
2.29.2


[PATCH v3 3/3] drm/amdgpu: correct DRM_ERROR for kvmalloc_array

2021-03-02 Thread Chen Li


The allocation uses kvmalloc_array rather than kcalloc, so make the error
message match to avoid confusion when debugging.
Signed-off-by: Chen Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d9ae2cb86bc7..b5c766998045 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -559,7 +559,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
sizeof(struct page *),
GFP_KERNEL | __GFP_ZERO);
if (!e->user_pages) {
-   DRM_ERROR("calloc failure\n");
+   DRM_ERROR("kvmalloc_array failure\n");
return -ENOMEM;
}
 
-- 
2.30.0





[PATCH v3 2/3] drm/amdgpu: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li


The number of chunks/chunks_array may be passed in
by userspace and can be large.

Signed-off-by: Chen Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 3e240b952e79..d9ae2cb86bc7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -117,7 +117,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
if (cs->in.num_chunks == 0)
return 0;
 
-   chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
if (!chunk_array)
return -ENOMEM;
 
@@ -144,7 +144,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
}
 
p->nchunks = cs->in.num_chunks;
-   p->chunks = kmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
GFP_KERNEL);
if (!p->chunks) {
ret = -ENOMEM;
@@ -238,7 +238,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 
if (p->uf_entry.tv.bo)
p->job->uf_addr = uf_offset;
-   kfree(chunk_array);
+   kvfree(chunk_array);
 
/* Use this opportunity to fill in task info for the vm */
amdgpu_vm_set_task_info(vm);
@@ -250,11 +250,11 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 free_partial_kdata:
for (; i >= 0; i--)
kvfree(p->chunks[i].kdata);
-   kfree(p->chunks);
+   kvfree(p->chunks);
p->chunks = NULL;
p->nchunks = 0;
 free_chunk:
-   kfree(chunk_array);
+   kvfree(chunk_array);
 
return ret;
 }
@@ -706,7 +706,7 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
 
for (i = 0; i < parser->nchunks; i++)
kvfree(parser->chunks[i].kdata);
-   kfree(parser->chunks);
+   kvfree(parser->chunks);
if (parser->job)
amdgpu_job_free(parser->job);
if (parser->uf_entry.tv.bo) {
-- 
2.30.0





[PATCH v3 1/3] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li


The number of chunks/chunks_array may be passed in
by userspace and can be large.

Signed-off-by: Chen Li 
---
 drivers/gpu/drm/radeon/radeon_cs.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index 35e937d39b51..fb736ef9f9aa 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
p->chunk_relocs = NULL;
p->chunk_flags = NULL;
p->chunk_const_ib = NULL;
-   p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
if (p->chunks_array == NULL) {
return -ENOMEM;
}
@@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
}
p->cs_flags = 0;
p->nchunks = cs->num_chunks;
-   p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
if (p->chunks == NULL) {
return -ENOMEM;
}
@@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
kvfree(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
kvfree(parser->chunks[i].kdata);
-   kfree(parser->chunks);
-   kfree(parser->chunks_array);
+   kvfree(parser->chunks);
+   kvfree(parser->chunks_array);
radeon_ib_free(parser->rdev, &parser->ib);
radeon_ib_free(parser->rdev, &parser->const_ib);
 }
-- 
2.30.0





[PATCH v3 0/3] Use kvmalloc_array for radeon and amdgpu CS chunks

2021-03-02 Thread Chen Li


When testing the kernel with trinity, the kernel became tainted because a
radeon CS required a large allocation whose order exceeded MAX_ORDER.

kvmalloc/kvmalloc_array should be used here, since they fall back to vmalloc
when necessary.

Chen Li (3):
  drm/radeon: Use kvmalloc for CS chunks
  drm/amdgpu: Use kvmalloc for CS chunks
  drm/amdgpu: correct DRM_ERROR for kvmalloc_array

Changelog:
  v1->v2:
* also use kvmalloc in amdgpu
* fix a DRM_ERROR message for kvmalloc_array.
  v2->v3:
* add missing kvfree for amdgpu CS

 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 14 +++++++-------
 drivers/gpu/drm/radeon/radeon_cs.c |  8 ++++----
 2 files changed, 11 insertions(+), 11 deletions(-)

--
2.30.0




Re: [PATCH v2 2/3] drm/amdgpu: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li
On Wed, 03 Mar 2021 10:23:01 +0800,
Alex Deucher wrote:
>
> On Tue, Mar 2, 2021 at 9:16 PM Chen Li  wrote:
> >
> >
> > The number of chunks/chunks_array may be passed in
> > by userspace and can be large.
> >
>
> We also need to kvfree these.
Thanks for pointing this out! I will add it in v3.
>
> Alex
>
> > Signed-off-by: Chen Li 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index 3e240b952e79..aefb7e68977d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -117,7 +117,7 @@ static int amdgpu_cs_parser_init(struct 
> > amdgpu_cs_parser *p, union drm_amdgpu_cs
> > if (cs->in.num_chunks == 0)
> > return 0;
> >
> > -   chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), 
> > GFP_KERNEL);
> > +   chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), 
> > GFP_KERNEL);
> > if (!chunk_array)
> > return -ENOMEM;
> >
> > @@ -144,7 +144,7 @@ static int amdgpu_cs_parser_init(struct 
> > amdgpu_cs_parser *p, union drm_amdgpu_cs
> > }
> >
> > p->nchunks = cs->in.num_chunks;
> > -   p->chunks = kmalloc_array(p->nchunks, sizeof(struct 
> > amdgpu_cs_chunk),
> > +   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct 
> > amdgpu_cs_chunk),
> > GFP_KERNEL);
> > if (!p->chunks) {
> > ret = -ENOMEM;
> > --
> > 2.30.0
> >
> >
> >
>
>




Re: [PATCH v2 2/3] drm/amdgpu: Use kvmalloc for CS chunks

2021-03-02 Thread Alex Deucher
On Tue, Mar 2, 2021 at 9:16 PM Chen Li  wrote:
>
>
> The number of chunks/chunks_array may be passed in
> by userspace and can be large.
>

We also need to kvfree these.

Alex

> Signed-off-by: Chen Li 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 3e240b952e79..aefb7e68977d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -117,7 +117,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser 
> *p, union drm_amdgpu_cs
> if (cs->in.num_chunks == 0)
> return 0;
>
> -   chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), 
> GFP_KERNEL);
> +   chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), 
> GFP_KERNEL);
> if (!chunk_array)
> return -ENOMEM;
>
> @@ -144,7 +144,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser 
> *p, union drm_amdgpu_cs
> }
>
> p->nchunks = cs->in.num_chunks;
> -   p->chunks = kmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
> +   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
> GFP_KERNEL);
> if (!p->chunks) {
> ret = -ENOMEM;
> --
> 2.30.0
>
>
>


[PATCH v2 3/3] drm/amdgpu: correct DRM_ERROR for kvmalloc_array

2021-03-02 Thread Chen Li


The allocation uses kvmalloc_array rather than kcalloc, so make the error
message match to avoid confusion when debugging.
Signed-off-by: Chen Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index aefb7e68977d..a1df980864a6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -559,7 +559,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
sizeof(struct page *),
GFP_KERNEL | __GFP_ZERO);
if (!e->user_pages) {
-   DRM_ERROR("calloc failure\n");
+   DRM_ERROR("kvmalloc_array failure\n");
return -ENOMEM;
}
 
-- 
2.30.0





Re: [PATCH] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li
On Tue, 02 Mar 2021 22:13:11 +0800,
Christian König wrote:
>
> Am 02.03.21 um 07:42 schrieb Chen Li:
> > The number of chunks/chunks_array may be passed in
> > by userspace and can be large.
>
> I'm wondering if we shouldn't rather restrict the number of chunks.

If the number is restricted, is there any risk, and what would be a proper
limit here?
>
> > It has been observed to cause kcalloc failures from trinity fuzzy test:
> >
> > ```
> >   WARNING: CPU: 0 PID: 5487 at mm/page_alloc.c:4385
> >   __alloc_pages_nodemask+0x2d8/0x14d0
> >
> > ..
> >
> > Trace:
> > __warn.part.4+0x11c/0x174
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > warn_slowpath_null+0x84/0xb0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > alloc_pages_current+0xf0/0x1b0
> > free_buffer_head+0x88/0xf0
> > jbd2_journal_try_to_free_buffers+0x1e0/0x2a0
> > ext4_releasepage+0x84/0x140
> > release_pages+0x414/0x4c0
> > release_pages+0x42c/0x4c0
> > __find_get_block+0x1a4/0x5b0
> > alloc_pages_current+0xcc/0x1b0
> > kmalloc_order+0x30/0xb0
> > __kmalloc+0x300/0x390
> > kmalloc_order_trace+0x48/0x110
> > __kmalloc+0x300/0x390
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > crypto_shash_update+0x5c/0x1c0
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > __wake_up_common_lock+0xb8/0x210
> > radeon_cs_ioctl+0xc8/0xb80 [radeon]
> > radeon_cs_ioctl+0x50/0xb80 [radeon]
> > drm_ioctl_kernel+0xf4/0x160
> > radeon_cs_ioctl+0x0/0xb80 [radeon]
> > drm_ioctl_kernel+0xa0/0x160
> > drm_ioctl+0x2dc/0x4f0
> > radeon_drm_ioctl+0x80/0xf0 [radeon]
> > new_sync_write+0x120/0x1c0
> > timerqueue_add+0x88/0x140
> > do_vfs_ioctl+0xe4/0x990
> > ksys_ioctl+0xdc/0x110
> > ksys_ioctl+0x78/0x110
> > sys_ioctl+0x2c/0x50
> > entSys+0xa0/0xc0
>
> Please drop the backtrace, it doesn't add any value to the commit log.

Ok, will drop it in v2.
>
> > ```
> >
> > Obviously, the required order in this case is larger than MAX_ORDER.
> > So, just use kvmalloc instead.
> >
> > Signed-off-by: Chen Li 
>
> Reviewed-by: Christian König 
>
> The same patch should probably applied to amdgpu as well if we don't already 
> use
> kvmalloc there as well.
>

Fair enough, will add it into a v2 as a series with this patch.
> Regards,
> Christian.
>
> > ---
> >   drivers/gpu/drm/radeon/radeon_cs.c | 8 
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
> > b/drivers/gpu/drm/radeon/radeon_cs.c
> > index 35e937d39b51..fb736ef9f9aa 100644
> > --- a/drivers/gpu/drm/radeon/radeon_cs.c
> > +++ b/drivers/gpu/drm/radeon/radeon_cs.c
> > @@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, 
> > void *data)
> > p->chunk_relocs = NULL;
> > p->chunk_flags = NULL;
> > p->chunk_const_ib = NULL;
> > -   p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
> > +   p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), 
> > GFP_KERNEL);
> > if (p->chunks_array == NULL) {
> > return -ENOMEM;
> > }
> > @@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, 
> > void *data)
> > }
> > p->cs_flags = 0;
> > p->nchunks = cs->num_chunks;
> > -   p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), 
> > GFP_KERNEL);
> > +   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), 
> > GFP_KERNEL);
> > if (p->chunks == NULL) {
> > return -ENOMEM;
> > }
> > @@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct 
> > radeon_cs_parser *parser, int error, bo
> > kvfree(parser->vm_bos);
> > for (i = 0; i < parser->nchunks; i++)
> > kvfree(parser->chunks[i].kdata);
> > -   kfree(parser->chunks);
> > -   kfree(parser->chunks_array);
> > +   kvfree(parser->chunks);
> > +   kvfree(parser->chunks_array);
> > radeon_ib_free(parser->rdev, &parser->ib);
> > radeon_ib_free(parser->rdev, &parser->const_ib);
> >   }
> > --
> > 2.30.0
> >
> >
>
>
>

Regards,
Chen Li.




[PATCH v2 1/3] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li


The number of chunks/chunks_array may be passed in
by userspace and can be large.

Signed-off-by: Chen Li 
---
 drivers/gpu/drm/radeon/radeon_cs.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index 35e937d39b51..fb736ef9f9aa 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
p->chunk_relocs = NULL;
p->chunk_flags = NULL;
p->chunk_const_ib = NULL;
-   p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
if (p->chunks_array == NULL) {
return -ENOMEM;
}
@@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
}
p->cs_flags = 0;
p->nchunks = cs->num_chunks;
-   p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
if (p->chunks == NULL) {
return -ENOMEM;
}
@@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
kvfree(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
kvfree(parser->chunks[i].kdata);
-   kfree(parser->chunks);
-   kfree(parser->chunks_array);
+   kvfree(parser->chunks);
+   kvfree(parser->chunks_array);
radeon_ib_free(parser->rdev, &parser->ib);
radeon_ib_free(parser->rdev, &parser->const_ib);
 }
-- 
2.30.0





[PATCH] drm/radeon: fix copy of uninitialized variable back to userspace

2021-03-02 Thread Colin King
From: Colin Ian King 

Currently the ioctl command RADEON_INFO_SI_BACKEND_ENABLED_MASK can
copy back uninitialised data in value_tmp that pointer *value points
to. This can occur when rdev->family is less than CHIP_BONAIRE and
also less than CHIP_TAHITI.  Fix this by adding the missing -EINVAL
return so that no invalid value is copied back to userspace.

Addresses-Coverity: ("Uninitialized scalar variable")
Cc: sta...@vger.kernel.org # 3.13+
Fixes: 439a1cfffe2c ("drm/radeon: expose render backend mask to the userspace")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/radeon/radeon_kms.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index 2479d6ab7a36..58876bb4ef2a 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -518,6 +518,7 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
*value = rdev->config.si.backend_enable_mask;
} else {
DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
+   return -EINVAL;
}
break;
case RADEON_INFO_MAX_SCLK:
-- 
2.30.0



[PATCH v2 2/3] drm/amdgpu: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li


The number of chunks/chunks_array may be passed in
by userspace and can be large.

Signed-off-by: Chen Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 3e240b952e79..aefb7e68977d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -117,7 +117,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
if (cs->in.num_chunks == 0)
return 0;
 
-   chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
if (!chunk_array)
return -ENOMEM;
 
@@ -144,7 +144,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
}
 
p->nchunks = cs->in.num_chunks;
-   p->chunks = kmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct amdgpu_cs_chunk),
GFP_KERNEL);
if (!p->chunks) {
ret = -ENOMEM;
-- 
2.30.0





[PATCH v2 0/3] Use kvmalloc_array for radeon and amdgpu CS chunks

2021-03-02 Thread Chen Li


When testing the kernel with trinity, the kernel became tainted because a
radeon CS required a large allocation whose order exceeded MAX_ORDER.

kvmalloc/kvmalloc_array should be used here, since they fall back to vmalloc
when necessary.

Chen Li (3):
  drm/radeon: Use kvmalloc for CS chunks
  drm/amdgpu: Use kvmalloc for CS chunks
  drm/amdgpu: correct DRM_ERROR for kvmalloc_array

Changelog:
  v1->v2:
* also use kvmalloc in amdgpu
* fix a DRM_ERROR message for kvmalloc_array.
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 6 +++---
 drivers/gpu/drm/radeon/radeon_cs.c |  8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)

-- 
2.30.0





Re: [PATCH] drm/ttm: ioremap buffer according to TTM mem caching setting

2021-03-02 Thread Dave Airlie
On Wed, 3 Mar 2021 at 08:45, Zeng, Oak  wrote:
>
> [AMD Official Use Only - Internal Distribution Only]
>
>
> Hi Daniel, Thomas, Dan,
>
>
>
> Does below message mean the calling ioremap_cache failed intel’s driver 
> build? I can see both ioremap_cache and ioremap_wc are defined in 
> arch/x86/mm/ioremap.c – why ioremap_wc doesn’t break intel driver’s build?

Just to clear up confusion here: the Linux kernel robot is hosted by
Intel, but it does not test Intel driver builds exclusively; it tests a lot
of different builds across many different architectures.

If the robot complains, it is because your patch breaks in the
configuration it describes; take the time to read that configuration
info and you will see it has nothing to do with Intel at all.

Dave.


RE: [PATCH] drm/ttm: ioremap buffer according to TTM mem caching setting

2021-03-02 Thread Zeng, Oak
[AMD Official Use Only - Internal Distribution Only]

Hi Daniel, Thomas, Dan,

Does the message below mean that calling ioremap_cache failed Intel's driver
build? I can see both ioremap_cache and ioremap_wc are defined in
arch/x86/mm/ioremap.c, so why doesn't ioremap_wc break Intel's driver build?

Are we supposed to use memremap(offset, size, MEMREMAP_WB) to replace
ioremap_cache? When I read https://lwn.net/Articles/653585/ I felt that
ioremap_cache returns an address annotated with __iomem while memremap returns
an address without the __iomem annotation. In our use case, GPU memory is
treated as UEFI SPM (specific purpose memory). I am not very sure whether
memremap (thus no __iomem annotation) is the right thing to do. What I am sure
of is that we have tested ioremap_cache and it works fine on AMD systems.

I will send out a test patch replacing ioremap_cache with ioremap_wc to
trigger the Intel build robot and see whether it fails the build. I suppose
it will not.

Regards,
Oak

From: Christian König 
Sent: Tuesday, March 2, 2021 6:31 AM
To: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Daniel 
Vetter ; Dave Airlie ; Thomas Hellström 
(Intel) 
Cc: Zeng, Oak ; kbuild-...@lists.01.org; Kuehling, Felix 
; Kasiviswanathan, Harish 
; Deucher, Alexander 
; Huang, JinHuiEric ; 
Koenig, Christian 
Subject: Re: [PATCH] drm/ttm: ioremap buffer according to TTM mem caching 
setting

Hi guys,

adding the usual suspects directly. Does anybody know offhand how to check
whether an architecture supports ioremap_cache()?

For now we only need this on X86, but I would feel better if we don't use an 
#ifdef here.

Regards,
Christian.
Am 02.03.21 um 05:12 schrieb kernel test robot:

Hi Oak,



Thank you for the patch! Yet something to improve:



[auto build test ERROR on drm-intel/for-linux-next]

[also build test ERROR on drm-tip/drm-tip linus/master v5.12-rc1 next-20210302]

[cannot apply to tegra-drm/drm/tegra/for-next drm-exynos/exynos-drm-next 
drm/drm-next]

[If your patch is applied to the wrong git tree, kindly drop us a note.

And when submitting patch, we suggest to use '--base' as documented in

https://git-scm.com/docs/git-format-patch]



url:
https://github.com/0day-ci/linux/commits/Oak-Zeng/drm-ttm-ioremap-buffer-according-to-TTM-mem-caching-setting/20210302-064500

base:   git://anongit.freedesktop.org/drm-intel for-linux-next

config: parisc-randconfig-r012-20210302 (attached as .config)

compiler: hppa-linux-gcc (GCC) 9.3.0

reproduce (this is a W=1 build):

wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross

chmod +x ~/bin/make.cross

# https://github.com/0day-ci/linux/commit/225bb3711439ec559dd72ae5af8e62d34ea60a64

git remote add linux-review https://github.com/0day-ci/linux

Re: Build regressions/improvements in v5.12-rc1

2021-03-02 Thread Geert Uytterhoeven
Hi Alex,

On Tue, Mar 2, 2021 at 8:30 PM Alex Deucher  wrote:
> On Mon, Mar 1, 2021 at 9:21 AM Geert Uytterhoeven  
> wrote:
> > On Mon, 1 Mar 2021, Geert Uytterhoeven wrote:
> > > Below is the list of build error/warning regressions/improvements in
> > > v5.12-rc1[1] compared to v5.11[2].
> > >
> > > Summarized:
> > >  - build errors: +2/-0
> >
> > > [1] 
> > > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/fe07bfda2fb9cdef8a4d4008a409bb02f35f1bd8/
> > >  (all 192 configs)
> > > [2] 
> > > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/f40ddce88593482919761f74910f42f4b84c004b/
> > >  (all 192 configs)
> > >
> > >
> > > *** ERRORS ***
> > >
> > > 2 error regressions:
> > >  + 
> > > /kisskb/src/drivers/gpu/drm/amd/amdgpu/../display/dc/calcs/dcn_calcs.c: 
> > > error: implicit declaration of function 'disable_kernel_vsx' 
> > > [-Werror=implicit-function-declaration]:  => 674:2
> > >  + 
> > > /kisskb/src/drivers/gpu/drm/amd/amdgpu/../display/dc/calcs/dcn_calcs.c: 
> > > error: implicit declaration of function 'enable_kernel_vsx' 
> > > [-Werror=implicit-function-declaration]:  => 638:2
> >
> > powerpc-gcc4.9/ppc64_book3e_allmodconfig
> >
> > This was fixed in v5.11-rc1, but reappeared in v5.12-rc1?
>
> Do you know what fixed it for 5.11?  I guess for PPC64 we depend on
> CONFIG_VSX?

Looking at the kisskb build logs for v5.11*, it seems compilation never
got to drivers/gpu/drm/ due to internal compiler errors that weren't caught
by my scripts.  So the errors listed above were not really fixed.

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: Build regressions/improvements in v5.12-rc1

2021-03-02 Thread Alex Deucher
On Mon, Mar 1, 2021 at 9:21 AM Geert Uytterhoeven  wrote:
>
> On Mon, 1 Mar 2021, Geert Uytterhoeven wrote:
> > Below is the list of build error/warning regressions/improvements in
> > v5.12-rc1[1] compared to v5.11[2].
> >
> > Summarized:
> >  - build errors: +2/-0
>
> > [1] 
> > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/fe07bfda2fb9cdef8a4d4008a409bb02f35f1bd8/
> >  (all 192 configs)
> > [2] 
> > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/f40ddce88593482919761f74910f42f4b84c004b/
> >  (all 192 configs)
> >
> >
> > *** ERRORS ***
> >
> > 2 error regressions:
> >  + /kisskb/src/drivers/gpu/drm/amd/amdgpu/../display/dc/calcs/dcn_calcs.c: 
> > error: implicit declaration of function 'disable_kernel_vsx' 
> > [-Werror=implicit-function-declaration]:  => 674:2
> >  + /kisskb/src/drivers/gpu/drm/amd/amdgpu/../display/dc/calcs/dcn_calcs.c: 
> > error: implicit declaration of function 'enable_kernel_vsx' 
> > [-Werror=implicit-function-declaration]:  => 638:2
>
> powerpc-gcc4.9/ppc64_book3e_allmodconfig
>
> This was fixed in v5.11-rc1, but reappeared in v5.12-rc1?

Do you know what fixed it for 5.11?  I guess for PPC64 we depend on CONFIG_VSX?

Alex

>
> Gr{oetje,eeting}s,
>
> Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- 
> ge...@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like 
> that.
> -- Linus Torvalds


Re: [PATCH] drm/amd/display: Fix off by one in hdmi_14_process_transaction()

2021-03-02 Thread Lakha, Bhawanpreet
[AMD Official Use Only - Internal Distribution Only]

Thanks

Reviewed-by: Bhawanpreet Lakha 

From: Dan Carpenter 
Sent: March 2, 2021 6:15 AM
To: Wentland, Harry ; Lakha, Bhawanpreet 

Cc: Li, Sun peng (Leo) ; Deucher, Alexander 
; Koenig, Christian ; 
David Airlie ; Daniel Vetter ; Dan Carpenter 
; Lakha, Bhawanpreet ; 
Siqueira, Rodrigo ; Liu, Wenjing 
; amd-gfx@lists.freedesktop.org 
; dri-de...@lists.freedesktop.org 
; kernel-janit...@vger.kernel.org 

Subject: [PATCH] drm/amd/display: Fix off by one in 
hdmi_14_process_transaction()

The hdcp_i2c_offsets[] array did not have an entry for
HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE so it led to an off by one
read overflow.  I added an entry and copied the 0x0 value for the offset
from similar code in drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c.

I also declared several of these arrays as having HDCP_MESSAGE_ID_MAX
entries.  This doesn't change the code, but it's just a belt-and-suspenders
approach to try to future-proof the code.

Fixes: 4c283fdac08a ("drm/amd/display: Add HDCP module")
Signed-off-by: Dan Carpenter 
---
From static analysis, as mentioned in the commit message the offset
is basically an educated guess.

I reported this bug on Apr 16, 2020 but I guess we lost track of it.

 drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c 
b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
index 5e384a8a83dc..51855a2624cf 100644
--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
@@ -39,7 +39,7 @@
 #define HDCP14_KSV_SIZE 5
 #define HDCP14_MAX_KSV_FIFO_SIZE 127*HDCP14_KSV_SIZE

-static const bool hdcp_cmd_is_read[] = {
+static const bool hdcp_cmd_is_read[HDCP_MESSAGE_ID_MAX] = {
 [HDCP_MESSAGE_ID_READ_BKSV] = true,
 [HDCP_MESSAGE_ID_READ_RI_R0] = true,
 [HDCP_MESSAGE_ID_READ_PJ] = true,
@@ -75,7 +75,7 @@ static const bool hdcp_cmd_is_read[] = {
 [HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = false
 };

-static const uint8_t hdcp_i2c_offsets[] = {
+static const uint8_t hdcp_i2c_offsets[HDCP_MESSAGE_ID_MAX] = {
 [HDCP_MESSAGE_ID_READ_BKSV] = 0x0,
 [HDCP_MESSAGE_ID_READ_RI_R0] = 0x8,
 [HDCP_MESSAGE_ID_READ_PJ] = 0xA,
@@ -106,7 +106,8 @@ static const uint8_t hdcp_i2c_offsets[] = {
 [HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_SEND_ACK] = 0x60,
 [HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_STREAM_MANAGE] = 0x60,
 [HDCP_MESSAGE_ID_READ_REPEATER_AUTH_STREAM_READY] = 0x80,
-   [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70
+   [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70,
+   [HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = 0x0,
 };

 struct protection_properties {
@@ -184,7 +185,7 @@ static const struct protection_properties 
hdmi_14_protection = {
 .process_transaction = hdmi_14_process_transaction
 };

-static const uint32_t hdcp_dpcd_addrs[] = {
+static const uint32_t hdcp_dpcd_addrs[HDCP_MESSAGE_ID_MAX] = {
 [HDCP_MESSAGE_ID_READ_BKSV] = 0x68000,
 [HDCP_MESSAGE_ID_READ_RI_R0] = 0x68005,
 [HDCP_MESSAGE_ID_READ_PJ] = 0x,
--
2.30.1
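The belt-and-suspenders sizing described in the patch can be sketched outside the kernel. The enum and message IDs below are invented stand-ins for the driver's HDCP_MESSAGE_ID_* values, not the real definitions:

```c
#include <stdint.h>

/* Stand-ins for the driver's HDCP_MESSAGE_ID_* enum; the real values
 * live in the DC headers and are only sketched here. */
enum msg_id {
	MSG_READ_BKSV,
	MSG_READ_RI_R0,
	MSG_WRITE_STREAM_TYPE,
	MSG_ID_MAX	/* one past the last valid ID */
};

/* Declaring the array with MSG_ID_MAX entries guarantees every valid
 * ID indexes in-bounds; designated-initializer entries that are left
 * out default to 0, so a forgotten message ID reads as offset 0x0
 * instead of running off the end of the array. */
static const uint8_t hdcp_offsets[MSG_ID_MAX] = {
	[MSG_READ_BKSV] = 0x0,
	[MSG_READ_RI_R0] = 0x8,
	/* MSG_WRITE_STREAM_TYPE intentionally omitted: implicit 0x0 */
};

uint8_t lookup_offset(enum msg_id id)
{
	return hdcp_offsets[id];
}
```

With the explicit size, adding a new ID to the enum automatically grows every table, which is exactly the off-by-one this fix closes.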



Re: [PATCH] drm/amd/display: Remove unnecessary conversion to bool

2021-03-02 Thread Alex Deucher
Applied.  Thanks!

Alex

On Mon, Mar 1, 2021 at 1:50 AM Jiapeng Chong
 wrote:
>
> Fix the following coccicheck warnings:
>
> ./drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c:298:33-38:
> WARNING: conversion to bool not needed here.
>
> Reported-by: Abaci Robot 
> Signed-off-by: Jiapeng Chong 
> ---
>  drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c 
> b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c
> index 3398540..fbefbba 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c
> @@ -295,7 +295,7 @@ bool dpp3_program_gamcor_lut(
> cm_helper_program_gamcor_xfer_func(dpp_base->ctx, params, &gam_regs);
>
> dpp3_program_gammcor_lut(dpp_base, params->rgb_resulted, 
> params->hw_points_num,
> -   next_mode == LUT_RAM_A ? true:false);
> +next_mode == LUT_RAM_A);
>
> //select Gamma LUT to use for next frame
> REG_UPDATE(CM_GAMCOR_CONTROL, CM_GAMCOR_SELECT, next_mode == 
> LUT_RAM_A ? 0:1);
> --
> 1.8.3.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH] drm/amdgpu: Verify bo size can fit framebuffer size

2021-03-02 Thread Alex Deucher
On Mon, Mar 1, 2021 at 12:11 PM Mark Yacoub  wrote:
>
> When creating a new framebuffer, verify that the bo size associated with
> it can handle the fb size.
> drm_gem_fb_init_with_funcs implements this check by calculating the
> minimum expected size of each plane. amdgpu now uses this function to
> initialize its fb as it performs the required checks.
>
> The bug was caught using igt-gpu-tools test: kms_addfb_basic.too-high
> and kms_addfb_basic.bo-too-small
>
> Suggested-by: Sean Paul 
> Cc: Alex Deucher 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Mark Yacoub 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 8 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c  | 3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h| 1 +
>  3 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 48cb33e5b3826..61684d543b8ef 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -872,13 +872,14 @@ static int amdgpu_display_get_fb_info(const struct 
> amdgpu_framebuffer *amdgpu_fb
>
>  int amdgpu_display_framebuffer_init(struct drm_device *dev,
> struct amdgpu_framebuffer *rfb,
> +   struct drm_file *file,
> const struct drm_mode_fb_cmd2 *mode_cmd,
> struct drm_gem_object *obj)
>  {
> int ret, i;
> rfb->base.obj[0] = obj;
> -   drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd);
> -   ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
> +   ret = drm_gem_fb_init_with_funcs(dev, &rfb->base, file, mode_cmd,
> +&amdgpu_fb_funcs);
> if (ret)
> goto fail;
>
> @@ -953,7 +954,8 @@ amdgpu_display_user_framebuffer_create(struct drm_device 
> *dev,
> return ERR_PTR(-ENOMEM);
> }
>
> -   ret = amdgpu_display_framebuffer_init(dev, amdgpu_fb, mode_cmd, obj);
> +   ret = amdgpu_display_framebuffer_init(dev, amdgpu_fb, file_priv,
> + mode_cmd, obj);
> if (ret) {
> kfree(amdgpu_fb);
> drm_gem_object_put(obj);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 0bf7d36c6686d..2b9c9a621c437 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -233,7 +233,8 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
> }
>
> ret = amdgpu_display_framebuffer_init(adev_to_drm(adev), &rfbdev->rfb,
> - &mode_cmd, gobj);
> + helper->client.file, &mode_cmd,
> + gobj);
> if (ret) {
> DRM_ERROR("failed to initialize framebuffer %d\n", ret);
> goto out;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> index 319cb19e1b99f..997b93674955e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> @@ -604,6 +604,7 @@ int amdgpu_display_get_crtc_scanoutpos(struct drm_device 
> *dev,
>
>  int amdgpu_display_framebuffer_init(struct drm_device *dev,
> struct amdgpu_framebuffer *rfb,
> +   struct drm_file *file,
> const struct drm_mode_fb_cmd2 *mode_cmd,
> struct drm_gem_object *obj);
>
> --
> 2.30.1.766.gb4fecdf3b7-goog
>
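The per-plane size check that drm_gem_fb_init_with_funcs() brings in can be modelled in a few lines. The struct and field names below are invented for illustration; the real helper walks every plane of the mode_cmd:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal model of the per-plane check: the backing object must cover
 * the last byte the framebuffer can touch.  A height of at least one
 * row is assumed. */
struct fake_fb {
	uint32_t width;		/* pixels per row */
	uint32_t height;	/* rows */
	uint32_t pitch;		/* bytes per row, >= width * cpp */
	uint32_t cpp;		/* bytes per pixel */
	uint32_t offset;	/* plane start inside the bo */
};

int fb_fits_bo(const struct fake_fb *fb, uint64_t bo_size)
{
	/* The last row only needs width * cpp bytes; all earlier rows
	 * occupy a full pitch. */
	uint64_t min_size = (uint64_t)(fb->height - 1) * fb->pitch
			  + (uint64_t)fb->width * fb->cpp
			  + fb->offset;
	return bo_size >= min_size;
}
```

This is the check that makes kms_addfb_basic.bo-too-small fail cleanly instead of letting scanout read past the end of the object.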


Re: [PATCH] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Alex Deucher
Applied.  Thanks!

Alex

On Tue, Mar 2, 2021 at 9:13 AM Christian König
 wrote:
>
> Am 02.03.21 um 07:42 schrieb Chen Li:
> > The number of chunks/chunks_array may be passed in
> > by userspace and can be large.
>
> I'm wondering if we shouldn't rather restrict the number of chunks.
>
> > It has been observed to cause kcalloc failures from trinity fuzzy test:
> >
> > ```
> >   WARNING: CPU: 0 PID: 5487 at mm/page_alloc.c:4385
> >   __alloc_pages_nodemask+0x2d8/0x14d0
> >
> > ..
> >
> > Trace:
> > __warn.part.4+0x11c/0x174
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > warn_slowpath_null+0x84/0xb0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > alloc_pages_current+0xf0/0x1b0
> > free_buffer_head+0x88/0xf0
> > jbd2_journal_try_to_free_buffers+0x1e0/0x2a0
> > ext4_releasepage+0x84/0x140
> > release_pages+0x414/0x4c0
> > release_pages+0x42c/0x4c0
> > __find_get_block+0x1a4/0x5b0
> > alloc_pages_current+0xcc/0x1b0
> > kmalloc_order+0x30/0xb0
> > __kmalloc+0x300/0x390
> > kmalloc_order_trace+0x48/0x110
> > __kmalloc+0x300/0x390
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > crypto_shash_update+0x5c/0x1c0
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > __wake_up_common_lock+0xb8/0x210
> > radeon_cs_ioctl+0xc8/0xb80 [radeon]
> > radeon_cs_ioctl+0x50/0xb80 [radeon]
> > drm_ioctl_kernel+0xf4/0x160
> > radeon_cs_ioctl+0x0/0xb80 [radeon]
> > drm_ioctl_kernel+0xa0/0x160
> > drm_ioctl+0x2dc/0x4f0
> > radeon_drm_ioctl+0x80/0xf0 [radeon]
> > new_sync_write+0x120/0x1c0
> > timerqueue_add+0x88/0x140
> > do_vfs_ioctl+0xe4/0x990
> > ksys_ioctl+0xdc/0x110
> > ksys_ioctl+0x78/0x110
> > sys_ioctl+0x2c/0x50
> > entSys+0xa0/0xc0
>
> Please drop the backtrace, it doesn't add any value to the commit log.
>
> > ```
> >
> > Obviously, the required order in this case is larger than MAX_ORDER.
> > So, just use kvmalloc instead.
> >
> > Signed-off-by: Chen Li 
>
> Reviewed-by: Christian König 
>
> The same patch should probably applied to amdgpu as well if we don't
> already use kvmalloc there as well.
>
> Regards,
> Christian.
>
> > ---
> >   drivers/gpu/drm/radeon/radeon_cs.c | 8 
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
> > b/drivers/gpu/drm/radeon/radeon_cs.c
> > index 35e937d39b51..fb736ef9f9aa 100644
> > --- a/drivers/gpu/drm/radeon/radeon_cs.c
> > +++ b/drivers/gpu/drm/radeon/radeon_cs.c
> > @@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, 
> > void *data)
> >   p->chunk_relocs = NULL;
> >   p->chunk_flags = NULL;
> >   p->chunk_const_ib = NULL;
> > - p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), 
> > GFP_KERNEL);
> > + p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), 
> > GFP_KERNEL);
> >   if (p->chunks_array == NULL) {
> >   return -ENOMEM;
> >   }
> > @@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, 
> > void *data)
> >   }
> >   p->cs_flags = 0;
> >   p->nchunks = cs->num_chunks;
> > - p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), 
> > GFP_KERNEL);
> > + p->chunks = kvmalloc_array(p->nchunks, sizeof(struct 
> > radeon_cs_chunk), GFP_KERNEL);
> >   if (p->chunks == NULL) {
> >   return -ENOMEM;
> >   }
> > @@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct 
> > radeon_cs_parser *parser, int error, bo
> >   kvfree(parser->vm_bos);
> >   for (i = 0; i < parser->nchunks; i++)
> >   kvfree(parser->chunks[i].kdata);
> > - kfree(parser->chunks);
> > - kfree(parser->chunks_array);
> > + kvfree(parser->chunks);
> > + kvfree(parser->chunks_array);
> >   radeon_ib_free(parser->rdev, &parser->ib);
> >   radeon_ib_free(parser->rdev, &parser->const_ib);
> >   }
> > --
> > 2.30.0
> >
> >


Re: [PATCH] drm/amd/display: Fix an uninitialized index variable

2021-03-02 Thread Alex Deucher
On Thu, Feb 25, 2021 at 10:01 AM Arnd Bergmann  wrote:
>
> From: Arnd Bergmann 
>
> clang points out that the new logic uses an always-uninitialized
> array index:
>
> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:9810:38: warning: 
> variable 'i' is uninitialized when used here [-Wuninitialized]
> timing  = &edid->detailed_timings[i];
>   ^
> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:9720:7: note: 
> initialize the variable 'i' to silence this warning
>
> My best guess is that the index should have been returned by the
> parse_hdmi_amd_vsdb() function that walks an array here, so do that.
>
> Fixes: f9b4f20c4777 ("drm/amd/display: Add Freesync HDMI support to DM")
> Signed-off-by: Arnd Bergmann 

Applied.  Thanks!

Alex


> ---
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c| 16 
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index b19b93c74bae..667c0d52dbfa 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -9736,7 +9736,7 @@ static bool parse_edid_cea(struct amdgpu_dm_connector 
> *aconnector,
> return false;
>  }
>
> -static bool parse_hdmi_amd_vsdb(struct amdgpu_dm_connector *aconnector,
> +static int parse_hdmi_amd_vsdb(struct amdgpu_dm_connector *aconnector,
> struct edid *edid, struct amdgpu_hdmi_vsdb_info *vsdb_info)
>  {
> uint8_t *edid_ext = NULL;
> @@ -9746,7 +9746,7 @@ static bool parse_hdmi_amd_vsdb(struct 
> amdgpu_dm_connector *aconnector,
> /*- drm_find_cea_extension() -*/
> /* No EDID or EDID extensions */
> if (edid == NULL || edid->extensions == 0)
> -   return false;
> +   return -ENODEV;
>
> /* Find CEA extension */
> for (i = 0; i < edid->extensions; i++) {
> @@ -9756,14 +9756,15 @@ static bool parse_hdmi_amd_vsdb(struct 
> amdgpu_dm_connector *aconnector,
> }
>
> if (i == edid->extensions)
> -   return false;
> +   return -ENODEV;
>
> /*- cea_db_offsets() -*/
> if (edid_ext[0] != CEA_EXT)
> -   return false;
> +   return -ENODEV;
>
> valid_vsdb_found = parse_edid_cea(aconnector, edid_ext, EDID_LENGTH, 
> vsdb_info);
> -   return valid_vsdb_found;
> +
> +   return valid_vsdb_found ? i : -ENODEV;
>  }
>
>  void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
> @@ -9781,7 +9782,6 @@ void amdgpu_dm_update_freesync_caps(struct 
> drm_connector *connector,
> struct amdgpu_device *adev = drm_to_adev(dev);
> bool freesync_capable = false;
> struct amdgpu_hdmi_vsdb_info vsdb_info = {0};
> -   bool hdmi_valid_vsdb_found = false;
>
> if (!connector->state) {
> DRM_ERROR("%s - Connector has no state", __func__);
> @@ -9857,8 +9857,8 @@ void amdgpu_dm_update_freesync_caps(struct 
> drm_connector *connector,
> }
> }
> } else if (edid && amdgpu_dm_connector->dc_sink->sink_signal == 
> SIGNAL_TYPE_HDMI_TYPE_A) {
> -   hdmi_valid_vsdb_found = 
> parse_hdmi_amd_vsdb(amdgpu_dm_connector, edid, &vsdb_info);
> -   if (hdmi_valid_vsdb_found && vsdb_info.freesync_supported) {
> +   i = parse_hdmi_amd_vsdb(amdgpu_dm_connector, edid, 
> &vsdb_info);
> +   if (i >= 0 && vsdb_info.freesync_supported) {
> timing  = &edid->detailed_timings[i];
> data= &timing->data.other_data;
>
> --
> 2.29.2
>
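The calling convention the patch adopts, a non-negative array index or a negative errno, can be sketched with a made-up search function (the data and function name below are illustrative, not driver code):

```c
#include <errno.h>

/* Sketch of the "non-negative index or negative errno" convention the
 * fix gives parse_hdmi_amd_vsdb(): callers use the return value as an
 * array index only after checking it is >= 0.  The search itself is a
 * stand-in for the CEA-extension scan. */
int find_first_even(const int *vals, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (vals[i] % 2 == 0)
			return i;	/* valid index into vals */
	return -ENODEV;			/* nothing found */
}
```

Folding the index into the return value is what lets the caller drop the separately tracked (and previously uninitialized) loop variable.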


Re: [PATCH][next] drm/amd/display: fix the return of the uninitialized value in ret

2021-03-02 Thread Alex Deucher
Applied.  Thanks!

Alex

On Tue, Mar 2, 2021 at 10:03 AM Harry Wentland  wrote:
>
> On 2021-03-02 9:05 a.m., Colin King wrote:
> > From: Colin Ian King 
> >
> > Currently if stream->signal is neither SIGNAL_TYPE_DISPLAY_PORT_MST nor
> > SIGNAL_TYPE_DISPLAY_PORT then variable ret is uninitialized and this is
> > checked for > 0 at the end of the function.  Ret should be initialized,
> > I believe setting it to zero is a correct default.
> >
> > Addresses-Coverity: ("Uninitialized scalar variable")
> > Fixes: bd0c064c161c ("drm/amd/display: Add return code instead of boolean 
> > for future use")
> > Signed-off-by: Colin Ian King 
>
> Reviewed-by: Harry Wentland 
>
> Harry
>
> > ---
> >   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > index 5159399f8239..5750818db8f6 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > @@ -530,7 +530,7 @@ bool dm_helpers_dp_write_dsc_enable(
> >   {
> >   uint8_t enable_dsc = enable ? 1 : 0;
> >   struct amdgpu_dm_connector *aconnector;
> > - uint8_t ret;
> > + uint8_t ret = 0;
> >
> >   if (!stream)
> >   return false;
> >


Re: [PATCH][next] drm/amd/display: fix the return of the uninitialized value in ret

2021-03-02 Thread Harry Wentland

On 2021-03-02 9:05 a.m., Colin King wrote:

From: Colin Ian King 

Currently if stream->signal is neither SIGNAL_TYPE_DISPLAY_PORT_MST nor
SIGNAL_TYPE_DISPLAY_PORT then variable ret is uninitialized and this is
checked for > 0 at the end of the function.  Ret should be initialized,
I believe setting it to zero is a correct default.

Addresses-Coverity: ("Uninitialized scalar variable")
Fixes: bd0c064c161c ("drm/amd/display: Add return code instead of boolean for future 
use")
Signed-off-by: Colin Ian King 


Reviewed-by: Harry Wentland 

Harry


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 5159399f8239..5750818db8f6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -530,7 +530,7 @@ bool dm_helpers_dp_write_dsc_enable(
  {
uint8_t enable_dsc = enable ? 1 : 0;
struct amdgpu_dm_connector *aconnector;
-   uint8_t ret;
+   uint8_t ret = 0;
  
  	if (!stream)

return false;




Re: [PATCH] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Christian König

Am 02.03.21 um 07:42 schrieb Chen Li:

The number of chunks/chunks_array may be passed in
by userspace and can be large.


I'm wondering if we shouldn't rather restrict the number of chunks.


It has been observed to cause kcalloc failures from trinity fuzzy test:

```
  WARNING: CPU: 0 PID: 5487 at mm/page_alloc.c:4385
  __alloc_pages_nodemask+0x2d8/0x14d0

..

Trace:
__warn.part.4+0x11c/0x174
__alloc_pages_nodemask+0x2d8/0x14d0
warn_slowpath_null+0x84/0xb0
__alloc_pages_nodemask+0x2d8/0x14d0
__alloc_pages_nodemask+0x2d8/0x14d0
alloc_pages_current+0xf0/0x1b0
free_buffer_head+0x88/0xf0
jbd2_journal_try_to_free_buffers+0x1e0/0x2a0
ext4_releasepage+0x84/0x140
release_pages+0x414/0x4c0
release_pages+0x42c/0x4c0
__find_get_block+0x1a4/0x5b0
alloc_pages_current+0xcc/0x1b0
kmalloc_order+0x30/0xb0
__kmalloc+0x300/0x390
kmalloc_order_trace+0x48/0x110
__kmalloc+0x300/0x390
radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
crypto_shash_update+0x5c/0x1c0
radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
__wake_up_common_lock+0xb8/0x210
radeon_cs_ioctl+0xc8/0xb80 [radeon]
radeon_cs_ioctl+0x50/0xb80 [radeon]
drm_ioctl_kernel+0xf4/0x160
radeon_cs_ioctl+0x0/0xb80 [radeon]
drm_ioctl_kernel+0xa0/0x160
drm_ioctl+0x2dc/0x4f0
radeon_drm_ioctl+0x80/0xf0 [radeon]
new_sync_write+0x120/0x1c0
timerqueue_add+0x88/0x140
do_vfs_ioctl+0xe4/0x990
ksys_ioctl+0xdc/0x110
ksys_ioctl+0x78/0x110
sys_ioctl+0x2c/0x50
entSys+0xa0/0xc0


Please drop the backtrace, it doesn't add any value to the commit log.


```

Obviously, the required order in this case is larger than MAX_ORDER.
So, just use kvmalloc instead.

Signed-off-by: Chen Li 


Reviewed-by: Christian König 

The same patch should probably applied to amdgpu as well if we don't 
already use kvmalloc there as well.


Regards,
Christian.


---
  drivers/gpu/drm/radeon/radeon_cs.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 35e937d39b51..fb736ef9f9aa 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
p->chunk_relocs = NULL;
p->chunk_flags = NULL;
p->chunk_const_ib = NULL;
-   p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), 
GFP_KERNEL);
if (p->chunks_array == NULL) {
return -ENOMEM;
}
@@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
}
p->cs_flags = 0;
p->nchunks = cs->num_chunks;
-   p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), 
GFP_KERNEL);
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), 
GFP_KERNEL);
if (p->chunks == NULL) {
return -ENOMEM;
}
@@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error, bo
kvfree(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
kvfree(parser->chunks[i].kdata);
-   kfree(parser->chunks);
-   kfree(parser->chunks_array);
+   kvfree(parser->chunks);
+   kvfree(parser->chunks_array);
radeon_ib_free(parser->rdev, &parser->ib);
radeon_ib_free(parser->rdev, &parser->const_ib);
  }
--
2.30.0




[PATCH] drm/radeon: Use kvmalloc for CS chunks

2021-03-02 Thread Chen Li
The number of chunks/chunks_array may be passed in
by userspace and can be large.

It has been observed to cause kcalloc failures from trinity fuzzy test:

```
 WARNING: CPU: 0 PID: 5487 at mm/page_alloc.c:4385
 __alloc_pages_nodemask+0x2d8/0x14d0

..

Trace:
__warn.part.4+0x11c/0x174
__alloc_pages_nodemask+0x2d8/0x14d0
warn_slowpath_null+0x84/0xb0
__alloc_pages_nodemask+0x2d8/0x14d0
__alloc_pages_nodemask+0x2d8/0x14d0
alloc_pages_current+0xf0/0x1b0
free_buffer_head+0x88/0xf0
jbd2_journal_try_to_free_buffers+0x1e0/0x2a0
ext4_releasepage+0x84/0x140
release_pages+0x414/0x4c0
release_pages+0x42c/0x4c0
__find_get_block+0x1a4/0x5b0
alloc_pages_current+0xcc/0x1b0
kmalloc_order+0x30/0xb0
__kmalloc+0x300/0x390
kmalloc_order_trace+0x48/0x110
__kmalloc+0x300/0x390
radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
crypto_shash_update+0x5c/0x1c0
radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
__wake_up_common_lock+0xb8/0x210
radeon_cs_ioctl+0xc8/0xb80 [radeon]
radeon_cs_ioctl+0x50/0xb80 [radeon]
drm_ioctl_kernel+0xf4/0x160
radeon_cs_ioctl+0x0/0xb80 [radeon]
drm_ioctl_kernel+0xa0/0x160
drm_ioctl+0x2dc/0x4f0
radeon_drm_ioctl+0x80/0xf0 [radeon]
new_sync_write+0x120/0x1c0
timerqueue_add+0x88/0x140
do_vfs_ioctl+0xe4/0x990
ksys_ioctl+0xdc/0x110
ksys_ioctl+0x78/0x110
sys_ioctl+0x2c/0x50
entSys+0xa0/0xc0
```

Obviously, the required order in this case is larger than MAX_ORDER.
So, just use kvmalloc instead.

Signed-off-by: Chen Li 
---
 drivers/gpu/drm/radeon/radeon_cs.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 35e937d39b51..fb736ef9f9aa 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
p->chunk_relocs = NULL;
p->chunk_flags = NULL;
p->chunk_const_ib = NULL;
-   p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
+   p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), 
GFP_KERNEL);
if (p->chunks_array == NULL) {
return -ENOMEM;
}
@@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
}
p->cs_flags = 0;
p->nchunks = cs->num_chunks;
-   p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), 
GFP_KERNEL);
+   p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), 
GFP_KERNEL);
if (p->chunks == NULL) {
return -ENOMEM;
}
@@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error, bo
kvfree(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
kvfree(parser->chunks[i].kdata);
-   kfree(parser->chunks);
-   kfree(parser->chunks_array);
+   kvfree(parser->chunks);
+   kvfree(parser->chunks_array);
radeon_ib_free(parser->rdev, &parser->ib);
radeon_ib_free(parser->rdev, &parser->const_ib);
 }
--
2.30.0
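The vmalloc fallback that makes kvmalloc_array() succeed where kmalloc's MAX_ORDER limit fails cannot be reproduced outside the kernel, but the other half of its value for a userspace-controlled count, the n * size overflow guard, can be sketched (alloc_array is an invented stand-in):

```c
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch of the overflow check that kvmalloc_array() (like
 * kcalloc) performs before allocating: a huge userspace-supplied count
 * is rejected instead of wrapping around to a small allocation. */
void *alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would wrap around */
	return malloc(n * size);
}
```

In the kernel, the fallback path additionally means allocations larger than the page allocator's contiguous limit come from vmalloc space, which is why the matching frees must switch to kvfree() as the patch does.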




[PATCH][next] drm/amd/display: fix the return of the uninitialized value in ret

2021-03-02 Thread Colin King
From: Colin Ian King 

Currently if stream->signal is neither SIGNAL_TYPE_DISPLAY_PORT_MST nor
SIGNAL_TYPE_DISPLAY_PORT then variable ret is uninitialized and this is
checked for > 0 at the end of the function.  Ret should be initialized,
I believe setting it to zero is a correct default.

Addresses-Coverity: ("Uninitialized scalar variable")
Fixes: bd0c064c161c ("drm/amd/display: Add return code instead of boolean for 
future use")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 5159399f8239..5750818db8f6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -530,7 +530,7 @@ bool dm_helpers_dp_write_dsc_enable(
 {
uint8_t enable_dsc = enable ? 1 : 0;
struct amdgpu_dm_connector *aconnector;
-   uint8_t ret;
+   uint8_t ret = 0;
 
if (!stream)
return false;
-- 
2.30.0
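The shape of the bug and of the one-line fix can be modelled with an invented signal enum (the values below merely stand in for the SIGNAL_TYPE_* constants; the aux-write results are made up):

```c
/* Model of the fix: give ret a defined default so the final "ret > 0"
 * test never reads an uninitialized value when no case matches. */
enum signal { SIG_DP, SIG_DP_MST, SIG_HDMI };

int write_dsc_enable(enum signal sig)
{
	int ret = 0;	/* the one-line fix: a defined default */

	switch (sig) {
	case SIG_DP:
		ret = 1;	/* stands in for a successful aux write */
		break;
	case SIG_DP_MST:
		ret = 2;
		break;
	default:
		/* neither DP branch ran; ret stays 0 */
		break;
	}
	return ret > 0;
}
```

Without the initializer, the default path would return whatever happened to be on the stack, which is exactly what Coverity flagged.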



Re: [PATCH 2/3] drm/amdgpu: introduce kfd user flag for amdgpu_bo

2021-03-02 Thread Nirmoy


On 3/2/21 2:06 PM, Christian König wrote:

Am 02.03.21 um 14:01 schrieb Nirmoy:


On 3/2/21 1:40 PM, Christian König wrote:



Am 02.03.21 um 12:33 schrieb Nirmoy Das:

Introduce a new flag for amdgpu_bo->flags to identify if
a BO is created by KFD.

Signed-off-by: Nirmoy Das 
---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  3 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    | 48 
++-

  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
  include/uapi/drm/amdgpu_drm.h |  5 ++
  6 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c

index 89d0e4f7c6a8..57798707cd5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1227,7 +1227,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
  bp.flags = alloc_flags;
  bp.type = bo_type;
  bp.resv = NULL;
-    ret = amdgpu_bo_create(adev, &bp, &bo);
+    ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
  if (ret) {
  pr_debug("Failed to create BO on domain %s. ret %d\n",
  domain_string(alloc_domain), ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c

index 8e9b8a6e6ef0..97d19f6b572d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -234,7 +234,8 @@ int amdgpu_gem_create_ioctl(struct drm_device 
*dev, void *data,

    AMDGPU_GEM_CREATE_VRAM_CLEARED |
    AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
    AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
-  AMDGPU_GEM_CREATE_ENCRYPTED))
+  AMDGPU_GEM_CREATE_ENCRYPTED |
+  AMDGPU_GEM_USER_KFD))


Please stick with the naming here. And why _USER_KFD and not just _KFD?


Ok, I will rename it to AMDGPU_GEM_KFD which sounds much better.


No. When you want to use those flags you should probably call this 
AMDGPU_GEM_CREATE_KFD.



Ah I see.


Nirmoy




Christian.




Thanks,

Nirmoy




Christian.


    return -EINVAL;
  diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c

index 0bd22ed1dacf..5ebce6d6784a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -697,6 +697,52 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
  return r;
  }
  +/**
+ * amdgpu_kfd_bo_create - create an &amdgpu_bo buffer object with 
kfd user flag

+ * @adev: amdgpu device object
+ * @bp: parameters to be used for the buffer object
+ * @bo_ptr: pointer to the buffer object pointer
+ *
+ * Creates an &amdgpu_bo buffer object; and if requested, also 
creates a

+ * shadow object.
+ * Shadow object is used to backup the original buffer object, and 
is always

+ * in GTT.
+ *
+ * Returns:
+ * 0 for success or a negative error code on failure.
+ */
+
+int amdgpu_kfd_bo_create(struct amdgpu_device *adev,
+ struct amdgpu_bo_param *bp,
+ struct amdgpu_bo **bo_ptr)
+{
+    u64 flags = bp->flags;
+    int r;
+
+    bp->flags = bp->flags & ~AMDGPU_GEM_CREATE_SHADOW;
+    bp->flags = bp->flags | AMDGPU_GEM_USER_KFD;
+    r = amdgpu_bo_do_create(adev, bp, bo_ptr);
+    if (r)
+    return r;
+
+    if ((flags & AMDGPU_GEM_CREATE_SHADOW) && !(adev->flags & 
AMD_IS_APU)) {

+    if (!bp->resv)
+ WARN_ON(dma_resv_lock((*bo_ptr)->tbo.base.resv,
+    NULL));
+
+    r = amdgpu_bo_create_shadow(adev, bp->size, *bo_ptr);
+
+    if (!bp->resv)
+    dma_resv_unlock((*bo_ptr)->tbo.base.resv);
+
+    if (r)
+    amdgpu_bo_unref(bo_ptr);
+    }
+
+    return r;
+}
+
+
  /**
   * amdgpu_bo_validate - validate an &amdgpu_bo buffer object
   * @bo: pointer to the buffer object
@@ -1309,7 +1355,7 @@ void amdgpu_bo_release_notify(struct 
ttm_buffer_object *bo)

    abo = ttm_to_amdgpu_bo(bo);
  -    if (abo->kfd_bo)
+    if (abo->flags & AMDGPU_GEM_USER_KFD)
  amdgpu_amdkfd_unreserve_memory_limit(abo);
    /* We only remove the fence if the resv has individualized. */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h

index 8cd96c9330dd..665ee0015f06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -245,6 +245,9 @@ void amdgpu_bo_placement_from_domain(struct 
amdgpu_bo *abo, u32 domain);

  int amdgpu_bo_create(struct amdgpu_device *adev,
   struct amdgpu_bo_param *bp,
   struct amdgpu_bo **bo_ptr);
+int amdgpu_kfd_bo_create(struct amdgpu_device *adev,
+ struct amdgpu_bo_param *bp,
+ struct amdgpu_bo **bo_ptr);
  int amdgpu_bo_create_reserved(struct amdgpu_device *adev,
    unsigned

Re: [PATCH 2/3] drm/amdgpu: introduce kfd user flag for amdgpu_bo

2021-03-02 Thread Christian König

Am 02.03.21 um 14:01 schrieb Nirmoy:


On 3/2/21 1:40 PM, Christian König wrote:



On 02.03.21 at 12:33, Nirmoy Das wrote:

Introduce a new flag for amdgpu_bo->flags to identify if
a BO is created by KFD.

Signed-off-by: Nirmoy Das 
---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  3 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    | 48 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
  include/uapi/drm/amdgpu_drm.h |  5 ++
  6 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 89d0e4f7c6a8..57798707cd5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1227,7 +1227,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
  bp.flags = alloc_flags;
  bp.type = bo_type;
  bp.resv = NULL;
-    ret = amdgpu_bo_create(adev, &bp, &bo);
+    ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
  if (ret) {
  pr_debug("Failed to create BO on domain %s. ret %d\n",
  domain_string(alloc_domain), ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 8e9b8a6e6ef0..97d19f6b572d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -234,7 +234,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
    AMDGPU_GEM_CREATE_VRAM_CLEARED |
    AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
    AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
-  AMDGPU_GEM_CREATE_ENCRYPTED))
+  AMDGPU_GEM_CREATE_ENCRYPTED |
+  AMDGPU_GEM_USER_KFD))


Please stick with the naming here. And why _USER_KFD and not just _KFD?


Ok, I will rename it to AMDGPU_GEM_KFD which sounds much better.


No. When you want to use those flags you should probably call this 
AMDGPU_GEM_CREATE_KFD.


Christian.




Thanks,

Nirmoy




Christian.



Re: [PATCH 2/3] drm/amdgpu: introduce kfd user flag for amdgpu_bo

2021-03-02 Thread Nirmoy


On 3/2/21 1:40 PM, Christian König wrote:



On 02.03.21 at 12:33, Nirmoy Das wrote:

Introduce a new flag for amdgpu_bo->flags to identify if
a BO is created by KFD.

Signed-off-by: Nirmoy Das 
---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  3 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    | 48 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h    |  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
  include/uapi/drm/amdgpu_drm.h |  5 ++
  6 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 89d0e4f7c6a8..57798707cd5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1227,7 +1227,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
  bp.flags = alloc_flags;
  bp.type = bo_type;
  bp.resv = NULL;
-    ret = amdgpu_bo_create(adev, &bp, &bo);
+    ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
  if (ret) {
  pr_debug("Failed to create BO on domain %s. ret %d\n",
  domain_string(alloc_domain), ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 8e9b8a6e6ef0..97d19f6b572d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -234,7 +234,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
    AMDGPU_GEM_CREATE_VRAM_CLEARED |
    AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
    AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
-  AMDGPU_GEM_CREATE_ENCRYPTED))
+  AMDGPU_GEM_CREATE_ENCRYPTED |
+  AMDGPU_GEM_USER_KFD))


Please stick with the naming here. And why _USER_KFD and not just _KFD?


Ok, I will rename it to AMDGPU_GEM_KFD which sounds much better.


Thanks,

Nirmoy




Christian.



Re: [PATCH 2/3] drm/amdgpu: introduce kfd user flag for amdgpu_bo

2021-03-02 Thread Christian König




On 02.03.21 at 12:33, Nirmoy Das wrote:

Introduce a new flag for amdgpu_bo->flags to identify if
a BO is created by KFD.

Signed-off-by: Nirmoy Das 
---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  3 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 48 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
  include/uapi/drm/amdgpu_drm.h |  5 ++
  6 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 89d0e4f7c6a8..57798707cd5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1227,7 +1227,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
bp.flags = alloc_flags;
bp.type = bo_type;
bp.resv = NULL;
-   ret = amdgpu_bo_create(adev, &bp, &bo);
+   ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
if (ret) {
pr_debug("Failed to create BO on domain %s. ret %d\n",
domain_string(alloc_domain), ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 8e9b8a6e6ef0..97d19f6b572d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -234,7 +234,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
  AMDGPU_GEM_CREATE_VRAM_CLEARED |
  AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
  AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
- AMDGPU_GEM_CREATE_ENCRYPTED))
+ AMDGPU_GEM_CREATE_ENCRYPTED |
+ AMDGPU_GEM_USER_KFD))


Please stick with the naming here. And why _USER_KFD and not just _KFD?

Christian.

  

[PATCH 3/3] drm/amdgpu: drm/amdkfd: add amdgpu_kfd_bo struct

2021-03-02 Thread Nirmoy Das
Implement a new struct based on the amdgpu_bo base class
for BOs created by the KFD device so that the KFD-related members
of amdgpu_bo can be moved there.

Signed-off-by: Nirmoy Das 
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 10 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c|  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 32 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  8 -
 4 files changed, 40 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 57798707cd5f..1f52ae4de609 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1152,6 +1152,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
struct sg_table *sg = NULL;
uint64_t user_addr = 0;
struct amdgpu_bo *bo;
+   struct amdgpu_kfd_bo *kbo;
struct amdgpu_bo_param bp;
u32 domain, alloc_domain;
u64 alloc_flags;
@@ -1227,17 +1228,20 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
bp.flags = alloc_flags;
bp.type = bo_type;
bp.resv = NULL;
-   ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
+   ret = amdgpu_kfd_bo_create(adev, &bp, &kbo);
if (ret) {
pr_debug("Failed to create BO on domain %s. ret %d\n",
domain_string(alloc_domain), ret);
goto err_bo_create;
}
+
+   bo = &kbo->bo;
if (bo_type == ttm_bo_type_sg) {
bo->tbo.sg = sg;
bo->tbo.ttm->sg = sg;
}
-   bo->kfd_bo = *mem;
+
+   kbo->kfd_bo = *mem;
(*mem)->bo = bo;
if (user_addr)
bo->flags |= AMDGPU_AMDKFD_USERPTR_BO;
@@ -1261,7 +1265,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
 
 allocate_init_user_pages_failed:
remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info);
-   amdgpu_bo_unref(&bo);
+   amdgpu_kfd_bo_unref(&kbo);
/* Don't unreserve system mem limit twice */
goto err_reserve_limit;
 err_bo_create:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 1da67cf812b1..eaaf4940abcb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -102,6 +102,7 @@ static bool amdgpu_mn_invalidate_hsa(struct mmu_interval_notifier *mni,
 unsigned long cur_seq)
 {
struct amdgpu_bo *bo = container_of(mni, struct amdgpu_bo, notifier);
+   struct amdgpu_kfd_bo *kbo = container_of(bo, struct amdgpu_kfd_bo, bo);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 
if (!mmu_notifier_range_blockable(range))
@@ -111,7 +112,7 @@ static bool amdgpu_mn_invalidate_hsa(struct mmu_interval_notifier *mni,
 
mmu_interval_set_seq(mni, cur_seq);
 
-   amdgpu_amdkfd_evict_userptr(bo->kfd_bo, bo->notifier.mm);
+   amdgpu_amdkfd_evict_userptr(kbo->kfd_bo, bo->notifier.mm);
mutex_unlock(&adev->notifier_lock);
 
return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 5ebce6d6784a..af40eb869995 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -551,8 +551,10 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 
acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
   sizeof(struct amdgpu_bo));
+   if (bp->bo_ptr_size < sizeof(struct amdgpu_bo))
+   bp->bo_ptr_size = sizeof(struct amdgpu_bo);
 
-   bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL);
+   bo = kzalloc(bp->bo_ptr_size, GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
drm_gem_private_object_init(adev_to_drm(adev), &bo->tbo.base, size);
@@ -714,35 +716,37 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
 
 int amdgpu_kfd_bo_create(struct amdgpu_device *adev,
 struct amdgpu_bo_param *bp,
-struct amdgpu_bo **bo_ptr)
+struct amdgpu_kfd_bo **kfd_bo_ptr)
 {
+   struct amdgpu_bo *bo_ptr;
u64 flags = bp->flags;
int r;
 
bp->flags = bp->flags & ~AMDGPU_GEM_CREATE_SHADOW;
bp->flags = bp->flags | AMDGPU_GEM_USER_KFD;
-   r = amdgpu_bo_do_create(adev, bp, bo_ptr);
+   bp->bo_ptr_size = sizeof(struct amdgpu_kfd_bo);
+   r = amdgpu_bo_do_create(adev, bp, &bo_ptr);
if (r)
return r;
 
+   *kfd_bo_ptr = (struct amdgpu_kfd_bo *)bo_ptr;
if ((flags & AMDGPU_GEM_CREATE_SHADOW) && !(adev->flags & AMD_IS_APU)) {
if (!bp->resv)
-   WARN_ON(dma_resv_lock((*bo_ptr)->tbo.base.resv,
+   WARN_ON(dma_resv_lock((*kfd_bo_ptr)->bo.tbo.base.resv,
  

[PATCH 1/3] drm/amdgpu: drm/amdkfd: split amdgpu_mn_register

2021-03-02 Thread Nirmoy Das
Split amdgpu_mn_register() into two functions to avoid unnecessary
bo->kfd_bo check.

Signed-off-by: Nirmoy Das 
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c| 21 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h|  8 +++
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 99ad4e1d0896..89d0e4f7c6a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -571,7 +571,7 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
goto out;
}
 
-   ret = amdgpu_mn_register(bo, user_addr);
+   ret = amdgpu_mn_register_hsa(bo, user_addr);
if (ret) {
pr_err("%s: Failed to register MMU notifier: %d\n",
   __func__, ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 828b5167ff12..1da67cf812b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -132,15 +132,28 @@ static const struct mmu_interval_notifier_ops amdgpu_mn_hsa_ops = {
  */
 int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
 {
-   if (bo->kfd_bo)
-   return mmu_interval_notifier_insert(&bo->notifier, current->mm,
-   addr, amdgpu_bo_size(bo),
-   &amdgpu_mn_hsa_ops);
return mmu_interval_notifier_insert(&bo->notifier, current->mm, addr,
amdgpu_bo_size(bo),
&amdgpu_mn_gfx_ops);
 }
 
+/**
+ * amdgpu_mn_register_hsa - register a BO for notifier updates
+ *
+ * @bo: amdgpu buffer object
+ * @addr: userptr addr we should monitor
+ *
+ * Registers a mmu_notifier for the given kfd BO at the specified address.
+ * Returns 0 on success, -ERRNO if anything goes wrong.
+ */
+
+int amdgpu_mn_register_hsa(struct amdgpu_bo *bo, unsigned long addr)
+{
+   return mmu_interval_notifier_insert(&bo->notifier, current->mm, addr,
+   amdgpu_bo_size(bo),
+   &amdgpu_mn_hsa_ops);
+}
+
 /**
  * amdgpu_mn_unregister - unregister a BO for notifier updates
  *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
index a292238f75eb..565ee5a0a3ad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
@@ -32,6 +32,7 @@
 
 #if defined(CONFIG_HMM_MIRROR)
 int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr);
+int amdgpu_mn_register_hsa(struct amdgpu_bo *bo, unsigned long addr);
 void amdgpu_mn_unregister(struct amdgpu_bo *bo);
 #else
 static inline int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
@@ -40,6 +41,13 @@ static inline int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
  "add CONFIG_ZONE_DEVICE=y in config file to fix this\n");
return -ENODEV;
 }
+
+static inline int amdgpu_mn_register_hsa(struct amdgpu_bo *bo, unsigned long addr)
+{
+   DRM_WARN_ONCE("HMM_MIRROR kernel config option is not enabled, "
+ "add CONFIG_ZONE_DEVICE=y in config file to fix this\n");
+   return -ENODEV;
+}
 static inline void amdgpu_mn_unregister(struct amdgpu_bo *bo) {}
 #endif
 
-- 
2.30.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/3] drm/amdgpu: introduce kfd user flag for amdgpu_bo

2021-03-02 Thread Nirmoy Das
Introduce a new flag for amdgpu_bo->flags to identify if
a BO is created by KFD.

Signed-off-by: Nirmoy Das 
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 48 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
 include/uapi/drm/amdgpu_drm.h |  5 ++
 6 files changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 89d0e4f7c6a8..57798707cd5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1227,7 +1227,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
bp.flags = alloc_flags;
bp.type = bo_type;
bp.resv = NULL;
-   ret = amdgpu_bo_create(adev, &bp, &bo);
+   ret = amdgpu_kfd_bo_create(adev, &bp, &bo);
if (ret) {
pr_debug("Failed to create BO on domain %s. ret %d\n",
domain_string(alloc_domain), ret);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 8e9b8a6e6ef0..97d19f6b572d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -234,7 +234,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
  AMDGPU_GEM_CREATE_VRAM_CLEARED |
  AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
  AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
- AMDGPU_GEM_CREATE_ENCRYPTED))
+ AMDGPU_GEM_CREATE_ENCRYPTED |
+ AMDGPU_GEM_USER_KFD))
 
return -EINVAL;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 0bd22ed1dacf..5ebce6d6784a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -697,6 +697,52 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
return r;
 }
 
+/**
+ * amdgpu_kfd_bo_create - create an &amdgpu_bo buffer object with kfd user flag
+ * @adev: amdgpu device object
+ * @bp: parameters to be used for the buffer object
+ * @bo_ptr: pointer to the buffer object pointer
+ *
+ * Creates an &amdgpu_bo buffer object; and if requested, also creates a
+ * shadow object.
+ * Shadow object is used to backup the original buffer object, and is always
+ * in GTT.
+ *
+ * Returns:
+ * 0 for success or a negative error code on failure.
+ */
+
+int amdgpu_kfd_bo_create(struct amdgpu_device *adev,
+struct amdgpu_bo_param *bp,
+struct amdgpu_bo **bo_ptr)
+{
+   u64 flags = bp->flags;
+   int r;
+
+   bp->flags = bp->flags & ~AMDGPU_GEM_CREATE_SHADOW;
+   bp->flags = bp->flags | AMDGPU_GEM_USER_KFD;
+   r = amdgpu_bo_do_create(adev, bp, bo_ptr);
+   if (r)
+   return r;
+
+   if ((flags & AMDGPU_GEM_CREATE_SHADOW) && !(adev->flags & AMD_IS_APU)) {
+   if (!bp->resv)
+   WARN_ON(dma_resv_lock((*bo_ptr)->tbo.base.resv,
+   NULL));
+
+   r = amdgpu_bo_create_shadow(adev, bp->size, *bo_ptr);
+
+   if (!bp->resv)
+   dma_resv_unlock((*bo_ptr)->tbo.base.resv);
+
+   if (r)
+   amdgpu_bo_unref(bo_ptr);
+   }
+
+   return r;
+}
+
+
 /**
  * amdgpu_bo_validate - validate an &amdgpu_bo buffer object
  * @bo: pointer to the buffer object
@@ -1309,7 +1355,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 
abo = ttm_to_amdgpu_bo(bo);
 
-   if (abo->kfd_bo)
+   if (abo->flags & AMDGPU_GEM_USER_KFD)
amdgpu_amdkfd_unreserve_memory_limit(abo);
 
/* We only remove the fence if the resv has individualized. */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 8cd96c9330dd..665ee0015f06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -245,6 +245,9 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain);
 int amdgpu_bo_create(struct amdgpu_device *adev,
 struct amdgpu_bo_param *bp,
 struct amdgpu_bo **bo_ptr);
+int amdgpu_kfd_bo_create(struct amdgpu_device *adev,
+struct amdgpu_bo_param *bp,
+struct amdgpu_bo **bo_ptr);
 int amdgpu_bo_create_reserved(struct amdgpu_device *adev,
  unsigned long size, int align,
  u32 domain, struct amdgpu_bo **bo_ptr,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 7b2db779f313..d36b1

Re: [PATCH] drm/ttm: ioremap buffer according to TTM mem caching setting

2021-03-02 Thread Christian König

Hi guys,

adding the usual suspects directly. Does anybody know off hand how to check
if an architecture supports ioremap_cache()?


For now we only need this on X86, but I would feel better if we don't 
use an #ifdef here.


Regards,
Christian.

Am 02.03.21 um 05:12 schrieb kernel test robot:

Hi Oak,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on drm-intel/for-linux-next]
[also build test ERROR on drm-tip/drm-tip linus/master v5.12-rc1 next-20210302]
[cannot apply to tegra-drm/drm/tegra/for-next drm-exynos/exynos-drm-next 
drm/drm-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Oak-Zeng/drm-ttm-ioremap-buffer-according-to-TTM-mem-caching-setting/20210302-064500
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: parisc-randconfig-r012-20210302 (attached as .config)
compiler: hppa-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
 wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
 chmod +x ~/bin/make.cross
 # 
https://github.com/0day-ci/linux/commit/225bb3711439ec559dd72ae5af8e62d34ea60a64
 git remote add linux-review https://github.com/0day-ci/linux
 git fetch --no-tags linux-review 
Oak-Zeng/drm-ttm-ioremap-buffer-according-to-TTM-mem-caching-setting/20210302-064500
 git checkout 225bb3711439ec559dd72ae5af8e62d34ea60a64
 # save the attached .config to linux build tree
 COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
ARCH=parisc

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_resource_ioremap':

drivers/gpu/drm/ttm/ttm_bo_util.c:95:11: error: implicit declaration of 
function 'ioremap_cache'; did you mean 'ioremap_uc'? 
[-Werror=implicit-function-declaration]

   95 |addr = ioremap_cache(mem->bus.offset, bus_size);
  |   ^
  |   ioremap_uc
drivers/gpu/drm/ttm/ttm_bo_util.c:95:9: warning: assignment to 'void *' 
from 'int' makes pointer from integer without a cast [-Wint-conversion]
   95 |addr = ioremap_cache(mem->bus.offset, bus_size);
  | ^
drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_ioremap':
drivers/gpu/drm/ttm/ttm_bo_util.c:379:17: warning: assignment to 'void *' 
from 'int' makes pointer from integer without a cast [-Wint-conversion]
  379 |map->virtual = ioremap_cache(bo->mem.bus.offset + offset,
  | ^
drivers/gpu/drm/ttm/ttm_bo_util.c: In function 'ttm_bo_vmap':
drivers/gpu/drm/ttm/ttm_bo_util.c:500:16: warning: assignment to 'void *' 
from 'int' makes pointer from integer without a cast [-Wint-conversion]
  500 |vaddr_iomem = ioremap_cache(mem->bus.offset,
  |^
cc1: some warnings being treated as errors


vim +95 drivers/gpu/drm/ttm/ttm_bo_util.c

 74 
 75 static int ttm_resource_ioremap(struct ttm_bo_device *bdev,
 76struct ttm_resource *mem,
 77void **virtual)
 78 {
 79 int ret;
 80 void *addr;
 81 
 82 *virtual = NULL;
 83 ret = ttm_mem_io_reserve(bdev, mem);
 84 if (ret || !mem->bus.is_iomem)
 85 return ret;
 86 
 87 if (mem->bus.addr) {
 88 addr = mem->bus.addr;
 89 } else {
 90 size_t bus_size = (size_t)mem->num_pages << PAGE_SHIFT;
 91 
 92 if (mem->bus.caching == ttm_write_combined)
 93 addr = ioremap_wc(mem->bus.offset, bus_size);
 94 else if (mem->bus.caching == ttm_cached)
   > 95  addr = ioremap_cache(mem->bus.offset, 
bus_size);
 96 else
 97 addr = ioremap(mem->bus.offset, bus_size);
 98 if (!addr) {
 99 ttm_mem_io_free(bdev, mem);
100 return -ENOMEM;
101 }
102 }
103 *virtual = addr;
104 return 0;
105 }
106 

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



[PATCH] drm/amd/display: Fix off by one in hdmi_14_process_transaction()

2021-03-02 Thread Dan Carpenter
The hdcp_i2c_offsets[] array did not have an entry for
HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE so it led to an off by one
read overflow.  I added an entry and copied the 0x0 value for the offset
from similar code in drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c.

I also declared several of these arrays as having HDCP_MESSAGE_ID_MAX
entries.  This doesn't change the code, but it's just a belt-and-suspenders
approach to try to future-proof the code.

Fixes: 4c283fdac08a ("drm/amd/display: Add HDCP module")
Signed-off-by: Dan Carpenter 
---
From static analysis. As mentioned in the commit message, the offset
is basically an educated guess.

I reported this bug on Apr 16, 2020 but I guess we lost track of it.

 drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
index 5e384a8a83dc..51855a2624cf 100644
--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
@@ -39,7 +39,7 @@
 #define HDCP14_KSV_SIZE 5
 #define HDCP14_MAX_KSV_FIFO_SIZE 127*HDCP14_KSV_SIZE
 
-static const bool hdcp_cmd_is_read[] = {
+static const bool hdcp_cmd_is_read[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = true,
[HDCP_MESSAGE_ID_READ_RI_R0] = true,
[HDCP_MESSAGE_ID_READ_PJ] = true,
@@ -75,7 +75,7 @@ static const bool hdcp_cmd_is_read[] = {
[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = false
 };
 
-static const uint8_t hdcp_i2c_offsets[] = {
+static const uint8_t hdcp_i2c_offsets[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = 0x0,
[HDCP_MESSAGE_ID_READ_RI_R0] = 0x8,
[HDCP_MESSAGE_ID_READ_PJ] = 0xA,
@@ -106,7 +106,8 @@ static const uint8_t hdcp_i2c_offsets[] = {
[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_SEND_ACK] = 0x60,
[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_STREAM_MANAGE] = 0x60,
[HDCP_MESSAGE_ID_READ_REPEATER_AUTH_STREAM_READY] = 0x80,
-   [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70
+   [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70,
+   [HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = 0x0,
 };
 
 struct protection_properties {
@@ -184,7 +185,7 @@ static const struct protection_properties hdmi_14_protection = {
.process_transaction = hdmi_14_process_transaction
 };
 
-static const uint32_t hdcp_dpcd_addrs[] = {
+static const uint32_t hdcp_dpcd_addrs[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = 0x68000,
[HDCP_MESSAGE_ID_READ_RI_R0] = 0x68005,
[HDCP_MESSAGE_ID_READ_PJ] = 0x,
-- 
2.30.1
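The belt-and-suspenders point above can be sketched in plain C. In a designated-initializer array with no explicit bound, the length is the highest initialized index plus one, so a missing entry silently shrinks the array below the enum range; sizing it with the `_MAX` enumerator guarantees every valid ID maps to a (zero-initialized) slot. The names below are hypothetical, loosely modeled on the HDCP enum in the patch:

```c
#include <stdint.h>

/* Hypothetical message IDs, modeled on the HDCP enum in the patch. */
enum msg_id {
	MSG_READ_BKSV,
	MSG_READ_RI_R0,
	MSG_WRITE_STREAM_TYPE,
	MSG_ID_MAX		/* one past the last valid ID */
};

/* Without the explicit [MSG_ID_MAX] bound, omitting the last
 * initializer below would make the array shorter than MSG_ID_MAX,
 * and indexing it with that ID would read past the end -- exactly
 * the off-by-one the patch fixes.  With the bound, an omitted entry
 * is merely zero. */
static const uint8_t msg_offsets[MSG_ID_MAX] = {
	[MSG_READ_BKSV]		= 0x00,
	[MSG_READ_RI_R0]	= 0x08,
	[MSG_WRITE_STREAM_TYPE]	= 0x00,
};

static uint8_t lookup_offset(enum msg_id id)
{
	return msg_offsets[id];
}
```

This is why the declared-size change is safe even though it generates no new code: it only pins the array length to the enum range.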

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 2/2] drm/amd/pm: update existing gpu_metrics interfaces V2

2021-03-02 Thread Lazar, Lijo
[AMD Public Use]

>> gpu_metrics->energy_accumulator = (uint64_t)metrics.EnergyAccumulator
>> gpu_metrics->pcie_link_width = (uint16_t)metrics->PcieWidth

The casts don't seem necessary. 

Series is Reviewed-by: Lijo Lazar 

Thanks,
Lijo

-Original Message-
From: Quan, Evan  
Sent: Tuesday, March 2, 2021 8:48 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Lazar, Lijo 
; Quan, Evan 
Subject: [PATCH 2/2] drm/amd/pm: update existing gpu_metrics interfaces V2

Update the gpu_metrics interface implementations to use the latest upgraded 
data structures.

V2: fit the data type change of energy_accumulator

Change-Id: Ibdbb1c3386de12c53bea3b8c68bbeebd14787287
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/pm/inc/smu_v11_0.h|  8 ++--
 .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c | 16 
 .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   | 40 +--
 .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 14 +++
 .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c|  4 +-
 .../gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c  | 10 ++---
 .../gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c   | 10 ++---
 drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c|  6 +++
 8 files changed, 57 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h 
b/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
index d400f75e9202..b5c4aff501ee 100644
--- a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
@@ -61,8 +61,8 @@
 #define LINK_WIDTH_MAX 6
 #define LINK_SPEED_MAX 3
 
-static __maybe_unused uint8_t link_width[] = {0, 1, 2, 4, 8, 12, 16};
-static __maybe_unused uint8_t link_speed[] = {25, 50, 80, 160};
+static __maybe_unused uint16_t link_width[] = {0, 1, 2, 4, 8, 12, 16};
+static __maybe_unused uint16_t link_speed[] = {25, 50, 80, 160};

 static const
 struct smu_temperature_range __maybe_unused smu11_thermal_policy[] =
@@ -290,11 +290,11 @@ int smu_v11_0_get_dpm_level_range(struct smu_context *smu,
 
 int smu_v11_0_get_current_pcie_link_width_level(struct smu_context *smu);
 
-uint8_t smu_v11_0_get_current_pcie_link_width(struct smu_context *smu);
+uint16_t smu_v11_0_get_current_pcie_link_width(struct smu_context *smu);
 
 int smu_v11_0_get_current_pcie_link_speed_level(struct smu_context *smu);
 
-uint8_t smu_v11_0_get_current_pcie_link_speed(struct smu_context *smu);
+uint16_t smu_v11_0_get_current_pcie_link_speed(struct smu_context *smu);
 
 int smu_v11_0_gfx_ulv_control(struct smu_context *smu,
  bool enablement);
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c 
b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
index 50d5f2256480..5bedf0315d14 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
@@ -236,7 +236,7 @@ static int arcturus_tables_init(struct smu_context *smu)
return -ENOMEM;
smu_table->metrics_time = 0;
 
-   smu_table->gpu_metrics_table_size = sizeof(struct gpu_metrics_v1_0);
+   smu_table->gpu_metrics_table_size = sizeof(struct gpu_metrics_v1_1);
smu_table->gpu_metrics_table = 
kzalloc(smu_table->gpu_metrics_table_size, GFP_KERNEL);
if (!smu_table->gpu_metrics_table) {
kfree(smu_table->metrics_table);
@@ -2211,7 +2211,7 @@ static void arcturus_log_thermal_throttling_event(struct smu_context *smu)
 	kgd2kfd_smi_event_throttle(smu->adev->kfd.dev, throttler_status);
 }
 
-static int arcturus_get_current_pcie_link_speed(struct smu_context *smu)
+static uint16_t arcturus_get_current_pcie_link_speed(struct smu_context *smu)
 {
struct amdgpu_device *adev = smu->adev;
uint32_t esm_ctrl;
@@ -2219,7 +2219,7 @@ static int arcturus_get_current_pcie_link_speed(struct smu_context *smu)
/* TODO: confirm this on real target */
esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
if ((esm_ctrl >> 15) & 0x1)
-   return (((esm_ctrl >> 8) & 0x3F) + 128);
+   return (uint16_t)(((esm_ctrl >> 8) & 0x3F) + 128);
 
return smu_v11_0_get_current_pcie_link_speed(smu);
 }
@@ -2228,8 +2228,8 @@ static ssize_t arcturus_get_gpu_metrics(struct smu_context *smu,
void **table)
 {
struct smu_table_context *smu_table = &smu->smu_table;
-   struct gpu_metrics_v1_0 *gpu_metrics =
-   (struct gpu_metrics_v1_0 *)smu_table->gpu_metrics_table;
+   struct gpu_metrics_v1_1 *gpu_metrics =
+   (struct gpu_metrics_v1_1 *)smu_table->gpu_metrics_table;
SmuMetrics_t metrics;
int ret = 0;
 
@@ -2239,7 +2239,7 @@ static ssize_t arcturus_get_gpu_metrics(struct smu_context *smu,
if (ret)
return ret;
 
-   smu_cmn_init_soft_gpu_metrics(gpu_metrics, 1, 0);
+   smu_cmn_init_soft_gpu_metrics(gpu_metrics, 1, 1);
 
gpu_metrics->temperature_edge = metrics.TemperatureEdge;
gpu_m
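The v1_0 to v1_1 bump above works because the metrics table carries a small version header that consumers check before interpreting the payload; widening fields (energy accumulator to 64 bits, link width/speed to 16 bits) is then a content-revision change rather than a silent layout break. A minimal sketch of that pattern, with hypothetical struct and function names (not the driver's actual definitions):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical mirror of a versioned metrics-table header: readers
 * check format/content revision (and the recorded size) before
 * touching the payload. */
struct metrics_header {
	uint16_t structure_size;
	uint8_t  format_revision;
	uint8_t  content_revision;
};

struct gpu_metrics_v1_1 {
	struct metrics_header header;
	uint64_t energy_accumulator;	/* widened in v1_1 */
	uint16_t pcie_link_width;	/* was uint8_t in v1_0 */
	uint16_t pcie_link_speed;
};

/* Analogous to smu_cmn_init_soft_gpu_metrics(gpu_metrics, 1, 1):
 * zero the table and stamp size + revisions. */
static void init_metrics(struct gpu_metrics_v1_1 *m,
			 uint8_t frev, uint8_t crev)
{
	memset(m, 0, sizeof(*m));
	m->header.structure_size   = sizeof(*m);
	m->header.format_revision  = frev;
	m->header.content_revision = crev;
}
```

A reader that only understands v1_0 can reject or down-convert a table whose content revision it doesn't know, which is what keeps the uint8_t-to-uint16_t widening from corrupting older consumers.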

Re: [PATCH] drm/amd/pm: correct the name of one function for vangogh

2021-03-02 Thread Wang, Kevin(Yang)
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Kevin Wang 

Best Regards,
Kevin

From: Du, Xiaojian 
Sent: Tuesday, March 2, 2021 6:20 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Huang, Ray ; Quan, Evan ; Wang, 
Kevin(Yang) ; Lazar, Lijo ; Du, 
Xiaojian 
Subject: [PATCH] drm/amd/pm: correct the name of one function for vangogh

This patch is to correct the name of one function for vangogh.
This function is used to print the clock levels of all kinds of IP
components.

Signed-off-by: Xiaojian Du 
---
 drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c 
b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
index 3f815430e67f..2bc55de1812c 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
@@ -442,7 +442,7 @@ static int vangogh_get_dpm_clk_limited(struct smu_context *smu, enum smu_clk_typ
 return 0;
 }

-static int vangogh_print_fine_grain_clk(struct smu_context *smu,
+static int vangogh_print_clk_levels(struct smu_context *smu,
 enum smu_clk_type clk_type, char *buf)
 {
 DpmClocks_t *clk_table = smu->smu_table.clocks_table;
@@ -1869,7 +1869,7 @@ static const struct pptable_funcs vangogh_ppt_funcs = {
 .interrupt_work = smu_v11_0_interrupt_work,
 .get_gpu_metrics = vangogh_get_gpu_metrics,
 .od_edit_dpm_table = vangogh_od_edit_dpm_table,
-   .print_clk_levels = vangogh_print_fine_grain_clk,
+   .print_clk_levels = vangogh_print_clk_levels,
 .set_default_dpm_table = vangogh_set_default_dpm_tables,
 .set_fine_grain_gfx_freq_parameters = 
vangogh_set_fine_grain_gfx_freq_parameters,
 .system_features_control = vangogh_system_features_control,
--
2.25.1



[PATCH] drm/amd/pm: correct the name of one function for vangogh

2021-03-02 Thread Xiaojian Du
This patch is to correct the name of one function for vangogh.
This function is used to print the clock levels of all kinds of IP
components.

Signed-off-by: Xiaojian Du 
---
 drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c 
b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
index 3f815430e67f..2bc55de1812c 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
@@ -442,7 +442,7 @@ static int vangogh_get_dpm_clk_limited(struct smu_context *smu, enum smu_clk_typ
return 0;
 }
 
-static int vangogh_print_fine_grain_clk(struct smu_context *smu,
+static int vangogh_print_clk_levels(struct smu_context *smu,
enum smu_clk_type clk_type, char *buf)
 {
DpmClocks_t *clk_table = smu->smu_table.clocks_table;
@@ -1869,7 +1869,7 @@ static const struct pptable_funcs vangogh_ppt_funcs = {
.interrupt_work = smu_v11_0_interrupt_work,
.get_gpu_metrics = vangogh_get_gpu_metrics,
.od_edit_dpm_table = vangogh_od_edit_dpm_table,
-   .print_clk_levels = vangogh_print_fine_grain_clk,
+   .print_clk_levels = vangogh_print_clk_levels,
.set_default_dpm_table = vangogh_set_default_dpm_tables,
.set_fine_grain_gfx_freq_parameters = 
vangogh_set_fine_grain_gfx_freq_parameters,
.system_features_control = vangogh_system_features_control,
-- 
2.25.1



Re: [PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in amdgpu_regs_pcie

2021-03-02 Thread Wang, Kevin(Yang)
[AMD Public Use]

yes, thanks lijo.

Best Regards,
Kevin


From: Lazar, Lijo 
Sent: Tuesday, March 2, 2021 4:09 PM
To: Wang, Kevin(Yang) ; amd-gfx@lists.freedesktop.org 

Cc: Zhang, Hawking 
Subject: RE: [PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in 
amdgpu_regs_pcie

[AMD Public Use]

Same can be done for pcie_write also.

Reviewed-by: Lijo Lazar 

-Original Message-
From: Wang, Kevin(Yang) 
Sent: Tuesday, March 2, 2021 1:34 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Lazar, Lijo ; 
Wang, Kevin(Yang) 
Subject: [PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in 
amdgpu_regs_pcie

the register offset doesn't need to be divided by 4 before being passed to RREG32_PCIE()

Signed-off-by: Kevin Wang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index a09469f84251..f3434a6f120f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -317,7 +317,7 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
 while (size) {
 uint32_t value;

-   value = RREG32_PCIE(*pos >> 2);
+   value = RREG32_PCIE(*pos);
 r = put_user(value, (uint32_t *)buf);
 if (r) {
 pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
--
2.17.1


RE: [PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in amdgpu_regs_pcie

2021-03-02 Thread Lazar, Lijo
[AMD Public Use]

Same can be done for pcie_write also.

Reviewed-by: Lijo Lazar 

-Original Message-
From: Wang, Kevin(Yang)  
Sent: Tuesday, March 2, 2021 1:34 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Lazar, Lijo ; 
Wang, Kevin(Yang) 
Subject: [PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in 
amdgpu_regs_pcie

the register offset doesn't need to be divided by 4 before being passed to RREG32_PCIE()

Signed-off-by: Kevin Wang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index a09469f84251..f3434a6f120f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -317,7 +317,7 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
while (size) {
uint32_t value;
 
-   value = RREG32_PCIE(*pos >> 2);
+   value = RREG32_PCIE(*pos);
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-- 
2.17.1


[PATCH] drm/amdgpu: fix parameter error of RREG32_PCIE() in amdgpu_regs_pcie

2021-03-02 Thread Kevin Wang
the register offset doesn't need to be divided by 4 before being passed to RREG32_PCIE()

Signed-off-by: Kevin Wang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index a09469f84251..f3434a6f120f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -317,7 +317,7 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
while (size) {
uint32_t value;
 
-   value = RREG32_PCIE(*pos >> 2);
+   value = RREG32_PCIE(*pos);
r = put_user(value, (uint32_t *)buf);
if (r) {
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-- 
2.17.1
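The bug above is a unit mismatch: the debugfs file position `*pos` is a byte offset, and RREG32_PCIE() in amdgpu also takes a byte offset, so shifting right by 2 (dividing by 4) converts twice and reads the wrong register for any position >= 4. A toy sketch of the failure mode, with `reg_space` standing in for the hardware register file (all names here are illustrative, not the driver's):

```c
#include <stdint.h>

/* Toy 32-bit register space: four registers at byte offsets 0, 4, 8, 12. */
static uint32_t reg_space[4] = { 0x11, 0x22, 0x33, 0x44 };

/* Takes a BYTE offset, like the real RREG32_PCIE(). */
static uint32_t rreg32_pcie(uint64_t byte_off)
{
	return reg_space[byte_off / 4];
}

/* Buggy caller: pos is already a byte offset, so >> 2 converts it
 * twice and reads the wrong register for any pos >= 4. */
static uint32_t read_buggy(uint64_t pos)
{
	return rreg32_pcie(pos >> 2);
}

/* Fixed caller: pass the byte offset straight through. */
static uint32_t read_fixed(uint64_t pos)
{
	return rreg32_pcie(pos);
}
```

Reading byte offset 4 should return the second register; the buggy path turns it into offset 1 and lands back on the first one, which is why only offsets past the first register ever showed the corruption.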
