Re: [PATCH] drm/amdgpu: Fix recursive locking warning

2022-02-03 Thread Christian König

Am 04.02.22 um 04:11 schrieb Rajneesh Bhardwaj:

Noticed the below warning while running a PyTorch workload on Vega10
GPUs. Change to a trylock to avoid conflicts with reservation locks that
are already held.

[  +0.03] WARNING: possible recursive locking detected
[  +0.03] 5.13.0-kfd-rajneesh #1030 Not tainted
[  +0.04] 
[  +0.02] python/4822 is trying to acquire lock:
[  +0.04] 932cd9a259f8 (reservation_ww_class_mutex){+.+.}-{3:3},
at: amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000203]
   but task is already holding lock:
[  +0.03] 932cbb7181f8 (reservation_ww_class_mutex){+.+.}-{3:3},
at: ttm_eu_reserve_buffers+0x270/0x470 [ttm]
[  +0.17]
   other info that might help us debug this:
[  +0.02]  Possible unsafe locking scenario:

[  +0.03]CPU0
[  +0.02]
[  +0.02]   lock(reservation_ww_class_mutex);
[  +0.04]   lock(reservation_ww_class_mutex);
[  +0.03]
*** DEADLOCK ***

[  +0.02]  May be due to missing lock nesting notation

[  +0.03] 7 locks held by python/4822:
[  +0.03]  #0: 932c4ac028d0 (&p->mutex){+.+.}-{3:3}, at:
kfd_ioctl_map_memory_to_gpu+0x10b/0x320 [amdgpu]
[  +0.000232]  #1: 932c55e830a8 (&info->lock#2){+.+.}-{3:3}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x64/0xf60 [amdgpu]
[  +0.000241]  #2: 932cc45b5e68 (&(*mem)->lock){+.+.}-{3:3}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0xdf/0xf60 [amdgpu]
[  +0.000236]  #3: b2b35606fd28
(reservation_ww_class_acquire){+.+.}-{0:0}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x232/0xf60 [amdgpu]
[  +0.000235]  #4: 932cbb7181f8
(reservation_ww_class_mutex){+.+.}-{3:3}, at:
ttm_eu_reserve_buffers+0x270/0x470 [ttm]
[  +0.15]  #5: c045f700 (*(sspp++)){}-{0:0}, at:
drm_dev_enter+0x5/0xa0 [drm]
[  +0.38]  #6: 932c52da7078 (&vm->eviction_lock){+.+.}-{3:3},
at: amdgpu_vm_bo_update_mapping+0xd5/0x4f0 [amdgpu]
[  +0.000195]
   stack backtrace:
[  +0.03] CPU: 11 PID: 4822 Comm: python Not tainted
5.13.0-kfd-rajneesh #1030
[  +0.05] Hardware name: GIGABYTE MZ01-CE0-00/MZ01-CE0-00, BIOS F02
08/29/2018
[  +0.03] Call Trace:
[  +0.03]  dump_stack+0x6d/0x89
[  +0.10]  __lock_acquire+0xb93/0x1a90
[  +0.09]  lock_acquire+0x25d/0x2d0
[  +0.05]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000184]  ? lock_is_held_type+0xa2/0x110
[  +0.06]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000184]  __ww_mutex_lock.constprop.17+0xca/0x1060
[  +0.07]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000183]  ? lock_release+0x13f/0x270
[  +0.05]  ? lock_is_held_type+0xa2/0x110
[  +0.06]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000183]  amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000185]  ttm_bo_release+0x4c6/0x580 [ttm]
[  +0.10]  amdgpu_bo_unref+0x1a/0x30 [amdgpu]
[  +0.000183]  amdgpu_vm_free_table+0x76/0xa0 [amdgpu]
[  +0.000189]  amdgpu_vm_free_pts+0xb8/0xf0 [amdgpu]
[  +0.000189]  amdgpu_vm_update_ptes+0x411/0x770 [amdgpu]
[  +0.000191]  amdgpu_vm_bo_update_mapping+0x324/0x4f0 [amdgpu]
[  +0.000191]  amdgpu_vm_bo_update+0x251/0x610 [amdgpu]
[  +0.000191]  update_gpuvm_pte+0xcc/0x290 [amdgpu]
[  +0.000229]  ? amdgpu_vm_bo_map+0xd7/0x130 [amdgpu]
[  +0.000190]  amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x912/0xf60
[amdgpu]
[  +0.000234]  kfd_ioctl_map_memory_to_gpu+0x182/0x320 [amdgpu]
[  +0.000218]  kfd_ioctl+0x2b9/0x600 [amdgpu]
[  +0.000216]  ? kfd_ioctl_unmap_memory_from_gpu+0x270/0x270 [amdgpu]
[  +0.000216]  ? lock_release+0x13f/0x270
[  +0.06]  ? __fget_files+0x107/0x1e0
[  +0.07]  __x64_sys_ioctl+0x8b/0xd0
[  +0.07]  do_syscall_64+0x36/0x70
[  +0.04]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  +0.07] RIP: 0033:0x7fbff90a7317
[  +0.04] Code: b3 66 90 48 8b 05 71 4b 2d 00 64 c7 00 26 00 00 00
48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f
05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 41 4b 2d 00 f7 d8 64 89 01 48
[  +0.05] RSP: 002b:7fbe301fe648 EFLAGS: 0246 ORIG_RAX:
0010
[  +0.06] RAX: ffda RBX: 7fbcc402d820 RCX:
7fbff90a7317
[  +0.03] RDX: 7fbe301fe690 RSI: c0184b18 RDI:
0004
[  +0.03] RBP: 7fbe301fe690 R08:  R09:
7fbcc402d880
[  +0.03] R10: 02001000 R11: 0246 R12:
c0184b18
[  +0.03] R13: 0004 R14: 7fbf689593a0 R15:
7fbcc402d820

Cc: Christian König 
Cc: Felix Kuehling 
Cc: Alex Deucher 

Fixes: 627b92ef9d7c ("drm/amdgpu: Wipe all VRAM on free when RAS is
enabled")
Signed-off-by: Rajneesh Bhardwaj 


The Fixes tag is not necessarily correct; I would remove it.

But apart from that the patch is Reviewed-by: Christian König.


Thanks,
Christian.


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
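Since the diff body is truncated here, a rough sketch of the trylock
approach described above (not the verbatim hunk) as it would appear in
amdgpu_bo_release_notify():

	/* Sketch only -- not the verbatim patch.  dma_resv_trylock()
	 * fails instead of blocking when the reservation is already
	 * held, e.g. while page tables are freed under
	 * ttm_eu_reserve_buffers(), which avoids the recursive
	 * ww_mutex acquisition lockdep reports above.
	 */
	if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
		return;

	/* ... wipe VRAM contents while holding the reservation ... */

	dma_resv_unlock(bo->base.resv);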

[PATCH v1 2/3] drm/amdgpu: Don't offset by 2 in FRU EEPROM

2022-02-03 Thread Luben Tuikov
Read buffers no longer expose the I2C address, and so we don't need to
offset by two when we get the read data.

Cc: Alex Deucher 
Cc: Kent Russell 
Cc: Andrey Grodzovsky 
Fixes: bd607166af7fe3 ("drm/amdgpu: Enable reading FRU chip via I2C v3")
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index 792337433a9ee5..61c4e71e399855 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -103,17 +103,13 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device 
*adev, uint32_t addrptr,
 
 int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 {
-   unsigned char buf[AMDGPU_PRODUCT_NAME_LEN+2];
+   unsigned char buf[AMDGPU_PRODUCT_NAME_LEN];
u32 addrptr;
int size, len;
-   int offset = 2;
 
if (!is_fru_eeprom_supported(adev))
return 0;
 
-   if (adev->asic_type == CHIP_ALDEBARAN)
-   offset = 0;
-
/* If algo exists, it means that the i2c_adapter's initialized */
if (!adev->pm.fru_eeprom_i2c_bus || !adev->pm.fru_eeprom_i2c_bus->algo) 
{
DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
@@ -155,8 +151,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
AMDGPU_PRODUCT_NAME_LEN);
len = AMDGPU_PRODUCT_NAME_LEN - 1;
}
-   /* Start at 2 due to buf using fields 0 and 1 for the address */
-   memcpy(adev->product_name, &buf[offset], len);
+   memcpy(adev->product_name, buf, len);
adev->product_name[len] = '\0';
 
addrptr += size + 1;
@@ -174,7 +169,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Product Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->product_number) - 1;
}
-   memcpy(adev->product_number, &buf[offset], len);
+   memcpy(adev->product_number, buf, len);
adev->product_number[len] = '\0';
 
addrptr += size + 1;
@@ -201,7 +196,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Serial Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->serial) - 1;
}
-   memcpy(adev->serial, &buf[offset], len);
+   memcpy(adev->serial, buf, len);
adev->serial[len] = '\0';
 
return 0;
-- 
2.35.0.3.gb23dac905b



[PATCH v1 1/3] drm/amdgpu: Nerf "buff" to "buf"

2022-02-03 Thread Luben Tuikov
Buffer is abbreviated "buf" (buf-fer), not "buff" (buff-er).
This is consistent with the rest of the kernel code.

Cc: Kent Russell 
Cc: Alex Deucher 
Signed-off-by: Luben Tuikov 
---
 .../gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c| 28 +--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index ce5d5ee336a990..792337433a9ee5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -77,11 +77,11 @@ static bool is_fru_eeprom_supported(struct amdgpu_device 
*adev)
 }
 
 static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
- unsigned char *buff)
+ unsigned char *buf)
 {
int ret, size;
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buff, 1);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buf, 1);
if (ret < 1) {
DRM_WARN("FRU: Failed to get size field");
return ret;
@@ -90,9 +90,9 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, 
uint32_t addrptr,
/* The size returned by the i2c requires subtraction of 0xC0 since the
 * size apparently always reports as 0xC0+actual size.
 */
-   size = buff[0] - I2C_PRODUCT_INFO_OFFSET;
+   size = buf[0] - I2C_PRODUCT_INFO_OFFSET;
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, 
buff, size);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, buf, 
size);
if (ret < 1) {
DRM_WARN("FRU: Failed to get data field");
return ret;
@@ -103,7 +103,7 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device 
*adev, uint32_t addrptr,
 
 int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 {
-   unsigned char buff[AMDGPU_PRODUCT_NAME_LEN+2];
+   unsigned char buf[AMDGPU_PRODUCT_NAME_LEN+2];
u32 addrptr;
int size, len;
int offset = 2;
@@ -133,7 +133,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * and the language field, so just start from 0xb, manufacturer size
 */
addrptr = FRU_EEPROM_MADDR + 0xb;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
return -EINVAL;
@@ -143,7 +143,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * size field being 1 byte. This pattern continues below.
 */
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU product name, ret:%d", size);
return -EINVAL;
@@ -155,12 +155,12 @@ int amdgpu_fru_get_product_info(struct amdgpu_device 
*adev)
AMDGPU_PRODUCT_NAME_LEN);
len = AMDGPU_PRODUCT_NAME_LEN - 1;
}
-   /* Start at 2 due to buff using fields 0 and 1 for the address */
-   memcpy(adev->product_name, &buff[offset], len);
+   /* Start at 2 due to buf using fields 0 and 1 for the address */
+   memcpy(adev->product_name, &buf[offset], len);
adev->product_name[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU product number, ret:%d", size);
return -EINVAL;
@@ -174,11 +174,11 @@ int amdgpu_fru_get_product_info(struct amdgpu_device 
*adev)
DRM_WARN("FRU Product Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->product_number) - 1;
}
-   memcpy(adev->product_number, &buff[offset], len);
+   memcpy(adev->product_number, &buf[offset], len);
adev->product_number[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
 
if (size < 1) {
DRM_ERROR("Failed to read FRU product version, ret:%d", size);
@@ -186,7 +186,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
}
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
 
if (size < 1) {
DRM_ERROR("Failed to read FRU serial number, ret:%d", size);
@@ -201,7 +201,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Serial Number is larger than 16 characters. This 
is likely a mistake");
  

[PATCH v1 3/3] drm/amdgpu: Prevent random memory access in FRU code

2022-02-03 Thread Luben Tuikov
Prevent random memory access in the FRU EEPROM code by passing the size of
the destination buffer to the reading routine, and reading no more than the
size of the buffer.

Cc: Kent Russell 
Cc: Alex Deucher 
Signed-off-by: Luben Tuikov 
---
 .../gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c| 21 +++
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index 61c4e71e399855..07e045fae83a9a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -77,9 +77,10 @@ static bool is_fru_eeprom_supported(struct amdgpu_device 
*adev)
 }
 
 static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
- unsigned char *buf)
+ unsigned char *buf, size_t buf_size)
 {
-   int ret, size;
+   int ret;
+   u8 size;
 
ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buf, 1);
if (ret < 1) {
@@ -90,9 +91,11 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device 
*adev, uint32_t addrptr,
/* The size returned by the i2c requires subtraction of 0xC0 since the
 * size apparently always reports as 0xC0+actual size.
 */
-   size = buf[0] - I2C_PRODUCT_INFO_OFFSET;
+   size = buf[0] & 0x3F;
+   size = min_t(size_t, size, buf_size);
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, buf, 
size);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1,
+buf, size);
if (ret < 1) {
DRM_WARN("FRU: Failed to get data field");
return ret;
@@ -129,7 +132,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * and the language field, so just start from 0xb, manufacturer size
 */
addrptr = FRU_EEPROM_MADDR + 0xb;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
return -EINVAL;
@@ -139,7 +142,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * size field being 1 byte. This pattern continues below.
 */
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU product name, ret:%d", size);
return -EINVAL;
@@ -155,7 +158,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
adev->product_name[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU product number, ret:%d", size);
return -EINVAL;
@@ -173,7 +176,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
adev->product_number[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
 
if (size < 1) {
DRM_ERROR("Failed to read FRU product version, ret:%d", size);
@@ -181,7 +184,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
}
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
 
if (size < 1) {
DRM_ERROR("Failed to read FRU serial number, ret:%d", size);
-- 
2.35.0.3.gb23dac905b
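A note on the 0x3F mask in this patch: the IPMI FRU type/length byte
carries the type code in bits 7:6 and the field length in bits 5:0, so
masking with 0x3F extracts the length for any type, where the old code's
0xC0 subtraction assumed the type bits were always set. As a sketch
(fru_field_len() is a hypothetical helper, not driver API):

	/* Decode an IPMI FRU type/length byte: bits 7:6 are the type
	 * code, bits 5:0 the field length in bytes.
	 */
	static inline u8 fru_field_len(u8 type_len)
	{
		return type_len & 0x3F;
	}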



[PATCH v1 0/3] AMDGPU FRU fixes

2022-02-03 Thread Luben Tuikov
Reordered the patches; fixed some bugs.

Luben Tuikov (3):
  drm/amdgpu: Nerf "buff" to "buf"
  drm/amdgpu: Don't offset by 2 in FRU EEPROM
  drm/amdgpu: Prevent random memory access in FRU code

Cc: Alex Deucher 
Cc: Kent Russell 
Cc: Andrey Grodzovsky 

 .../gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c| 36 +--
 1 file changed, 17 insertions(+), 19 deletions(-)

base-commit: af42455918c42274f6f317a88c878d59c4564168
-- 
2.35.0.3.gb23dac905b



RE: [Patch v5 00/24] CHECKPOINT RESTORE WITH ROCm

2022-02-03 Thread Bhardwaj, Rajneesh

Thank you Felix for the review and your guidance.

-Original Message-
From: Kuehling, Felix  
Sent: Thursday, February 3, 2022 10:22 PM
To: Bhardwaj, Rajneesh ; 
amd-gfx@lists.freedesktop.org
Cc: Yat Sin, David ; Deucher, Alexander 
; dri-de...@lists.freedesktop.org
Subject: Re: [Patch v5 00/24] CHECKPOINT RESTORE WITH ROCm

The series is

Reviewed-by: Felix Kuehling 


Am 2022-02-03 um 04:08 schrieb Rajneesh Bhardwaj:
> V5: Proposed IOCTL APIs for CRIU with consolidated feedback
>
> CRIU is a user space tool which is very popular for container live 
> migration in datacentres. It can checkpoint a running application, 
> save its complete state, memory contents and all system resources to 
> images on disk which can be migrated to another machine and restored later.
> More information on CRIU can be found at https://criu.org/Main_Page
>
> CRIU currently does not support Checkpoint / Restore with applications 
> that have devices files open so it cannot perform checkpoint and 
> restore on GPU devices which are very complex and have their own VRAM 
> managed privately. CRIU, however, can support external devices by using 
> a plugin architecture. We feel that we are getting close to finalizing 
> our IOCTL APIs which were again changed since V3 for an improved modular 
> design.
>
> Our changes to CRIU user space can be obtained from here:
> https://github.com/RadeonOpenCompute/criu/tree/amdgpu_rfc-211222
>
> We have tested the following scenarios:
>   - Checkpoint / Restore of a Pytorch (BERT) workload
>   - kfdtests with queues and events
>   - Gfx9 and Gfx10 based multi GPU test systems
>   - On baremetal and inside a docker container
>   - Restoring on a different system
>
> V1: Initial
> V2: Addressed review comments
> V3: Rebased on latest amd-staging-drm-next (5.15 based)
> v4: New API design and basic support for SVM, however there is an 
> outstanding issue with SVM restore which is currently under debug and 
> hopefully that won't impact the ioctl APIs as SVMs are treated as 
> private data hidden from user space like queues and events with the 
> new approach.
> V5: Fix the SVM related issues and finalize the APIs.
>
> David Yat Sin (9):
>drm/amdkfd: CRIU Implement KFD unpause operation
>drm/amdkfd: CRIU add queues support
>drm/amdkfd: CRIU restore queue ids
>drm/amdkfd: CRIU restore sdma id for queues
>drm/amdkfd: CRIU restore queue doorbell id
>drm/amdkfd: CRIU checkpoint and restore queue mqds
>drm/amdkfd: CRIU checkpoint and restore queue control stack
>drm/amdkfd: CRIU checkpoint and restore events
>drm/amdkfd: CRIU implement gpu_id remapping
>
> Rajneesh Bhardwaj (15):
>x86/configs: CRIU update debug rock defconfig
>drm/amdkfd: CRIU Introduce Checkpoint-Restore APIs
>drm/amdkfd: CRIU Implement KFD process_info ioctl
>drm/amdkfd: CRIU Implement KFD checkpoint ioctl
>drm/amdkfd: CRIU Implement KFD restore ioctl
>drm/amdkfd: CRIU Implement KFD resume ioctl
>drm/amdkfd: CRIU export BOs as prime dmabuf objects
>drm/amdkfd: CRIU checkpoint and restore xnack mode
>drm/amdkfd: CRIU allow external mm for svm ranges
>drm/amdkfd: use user_gpu_id for svm ranges
>drm/amdkfd: CRIU Discover svm ranges
>drm/amdkfd: CRIU Save Shared Virtual Memory ranges
>drm/amdkfd: CRIU prepare for svm resume
>drm/amdkfd: CRIU resume shared virtual memory ranges
>drm/amdkfd: Bump up KFD API version for CRIU
>
>   arch/x86/configs/rock-dbg_defconfig   |   53 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|7 +-
>   .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   64 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |   20 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |2 +
>   drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 1471 ++---
>   drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |2 +-
>   .../drm/amd/amdkfd/kfd_device_queue_manager.c |  185 ++-
>   .../drm/amd/amdkfd/kfd_device_queue_manager.h |   16 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_events.c   |  313 +++-
>   drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h  |   14 +
>   .../gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c  |   75 +
>   .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  |   77 +
>   .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c   |   92 ++
>   .../gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c   |   84 +
>   drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  160 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_process.c  |   72 +-
>   .../amd/amdkfd/kfd_process_queue_manager.c|  372 -
>   drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  331 +++-
>   drivers/gpu/drm/amd/amdkfd/kfd_svm.h  |   39 +
>   include/uapi/linux/kfd_ioctl.h|   84 +-
>   21 files changed, 3193 insertions(+), 340 deletions(-)
>


Re: [Patch v5 00/24] CHECKPOINT RESTORE WITH ROCm

2022-02-03 Thread Felix Kuehling

The series is

Reviewed-by: Felix Kuehling 


Am 2022-02-03 um 04:08 schrieb Rajneesh Bhardwaj:

V5: Proposed IOCTL APIs for CRIU with consolidated feedback

CRIU is a user space tool which is very popular for container live
migration in datacentres. It can checkpoint a running application, save
its complete state, memory contents and all system resources to images
on disk which can be migrated to another machine and restored later.
More information on CRIU can be found at https://criu.org/Main_Page

CRIU currently does not support Checkpoint / Restore with applications
that have devices files open so it cannot perform checkpoint and restore
on GPU devices which are very complex and have their own VRAM managed
privately. CRIU, however, can support external devices by using a plugin
architecture. We feel that we are getting close to finalizing our IOCTL
APIs which were again changed since V3 for an improved modular design.

Our changes to CRIU user space can be obtained from here:
https://github.com/RadeonOpenCompute/criu/tree/amdgpu_rfc-211222

We have tested the following scenarios:
  - Checkpoint / Restore of a Pytorch (BERT) workload
  - kfdtests with queues and events
  - Gfx9 and Gfx10 based multi GPU test systems
  - On baremetal and inside a docker container
  - Restoring on a different system

V1: Initial
V2: Addressed review comments
V3: Rebased on latest amd-staging-drm-next (5.15 based)
v4: New API design and basic support for SVM, however there is an
outstanding issue with SVM restore which is currently under debug and
hopefully that won't impact the ioctl APIs as SVMs are treated as
private data hidden from user space like queues and events with the new
approach.
V5: Fix the SVM related issues and finalize the APIs.

David Yat Sin (9):
   drm/amdkfd: CRIU Implement KFD unpause operation
   drm/amdkfd: CRIU add queues support
   drm/amdkfd: CRIU restore queue ids
   drm/amdkfd: CRIU restore sdma id for queues
   drm/amdkfd: CRIU restore queue doorbell id
   drm/amdkfd: CRIU checkpoint and restore queue mqds
   drm/amdkfd: CRIU checkpoint and restore queue control stack
   drm/amdkfd: CRIU checkpoint and restore events
   drm/amdkfd: CRIU implement gpu_id remapping

Rajneesh Bhardwaj (15):
   x86/configs: CRIU update debug rock defconfig
   drm/amdkfd: CRIU Introduce Checkpoint-Restore APIs
   drm/amdkfd: CRIU Implement KFD process_info ioctl
   drm/amdkfd: CRIU Implement KFD checkpoint ioctl
   drm/amdkfd: CRIU Implement KFD restore ioctl
   drm/amdkfd: CRIU Implement KFD resume ioctl
   drm/amdkfd: CRIU export BOs as prime dmabuf objects
   drm/amdkfd: CRIU checkpoint and restore xnack mode
   drm/amdkfd: CRIU allow external mm for svm ranges
   drm/amdkfd: use user_gpu_id for svm ranges
   drm/amdkfd: CRIU Discover svm ranges
   drm/amdkfd: CRIU Save Shared Virtual Memory ranges
   drm/amdkfd: CRIU prepare for svm resume
   drm/amdkfd: CRIU resume shared virtual memory ranges
   drm/amdkfd: Bump up KFD API version for CRIU

  arch/x86/configs/rock-dbg_defconfig   |   53 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|7 +-
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   64 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |   20 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |2 +
  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 1471 ++---
  drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |2 +-
  .../drm/amd/amdkfd/kfd_device_queue_manager.c |  185 ++-
  .../drm/amd/amdkfd/kfd_device_queue_manager.h |   16 +-
  drivers/gpu/drm/amd/amdkfd/kfd_events.c   |  313 +++-
  drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h  |   14 +
  .../gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c  |   75 +
  .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  |   77 +
  .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c   |   92 ++
  .../gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c   |   84 +
  drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  160 +-
  drivers/gpu/drm/amd/amdkfd/kfd_process.c  |   72 +-
  .../amd/amdkfd/kfd_process_queue_manager.c|  372 -
  drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  331 +++-
  drivers/gpu/drm/amd/amdkfd/kfd_svm.h  |   39 +
  include/uapi/linux/kfd_ioctl.h|   84 +-
  21 files changed, 3193 insertions(+), 340 deletions(-)



[PATCH] drm/amdgpu: Fix recursive locking warning

2022-02-03 Thread Rajneesh Bhardwaj
Noticed the below warning while running a PyTorch workload on Vega10
GPUs. Change to a trylock to avoid conflicts with reservation locks that
are already held.

[  +0.03] WARNING: possible recursive locking detected
[  +0.03] 5.13.0-kfd-rajneesh #1030 Not tainted
[  +0.04] 
[  +0.02] python/4822 is trying to acquire lock:
[  +0.04] 932cd9a259f8 (reservation_ww_class_mutex){+.+.}-{3:3},
at: amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000203]
  but task is already holding lock:
[  +0.03] 932cbb7181f8 (reservation_ww_class_mutex){+.+.}-{3:3},
at: ttm_eu_reserve_buffers+0x270/0x470 [ttm]
[  +0.17]
  other info that might help us debug this:
[  +0.02]  Possible unsafe locking scenario:

[  +0.03]CPU0
[  +0.02]
[  +0.02]   lock(reservation_ww_class_mutex);
[  +0.04]   lock(reservation_ww_class_mutex);
[  +0.03]
   *** DEADLOCK ***

[  +0.02]  May be due to missing lock nesting notation

[  +0.03] 7 locks held by python/4822:
[  +0.03]  #0: 932c4ac028d0 (&p->mutex){+.+.}-{3:3}, at:
kfd_ioctl_map_memory_to_gpu+0x10b/0x320 [amdgpu]
[  +0.000232]  #1: 932c55e830a8 (&info->lock#2){+.+.}-{3:3}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x64/0xf60 [amdgpu]
[  +0.000241]  #2: 932cc45b5e68 (&(*mem)->lock){+.+.}-{3:3}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0xdf/0xf60 [amdgpu]
[  +0.000236]  #3: b2b35606fd28
(reservation_ww_class_acquire){+.+.}-{0:0}, at:
amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x232/0xf60 [amdgpu]
[  +0.000235]  #4: 932cbb7181f8
(reservation_ww_class_mutex){+.+.}-{3:3}, at:
ttm_eu_reserve_buffers+0x270/0x470 [ttm]
[  +0.15]  #5: c045f700 (*(sspp++)){}-{0:0}, at:
drm_dev_enter+0x5/0xa0 [drm]
[  +0.38]  #6: 932c52da7078 (&vm->eviction_lock){+.+.}-{3:3},
at: amdgpu_vm_bo_update_mapping+0xd5/0x4f0 [amdgpu]
[  +0.000195]
  stack backtrace:
[  +0.03] CPU: 11 PID: 4822 Comm: python Not tainted
5.13.0-kfd-rajneesh #1030
[  +0.05] Hardware name: GIGABYTE MZ01-CE0-00/MZ01-CE0-00, BIOS F02
08/29/2018
[  +0.03] Call Trace:
[  +0.03]  dump_stack+0x6d/0x89
[  +0.10]  __lock_acquire+0xb93/0x1a90
[  +0.09]  lock_acquire+0x25d/0x2d0
[  +0.05]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000184]  ? lock_is_held_type+0xa2/0x110
[  +0.06]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000184]  __ww_mutex_lock.constprop.17+0xca/0x1060
[  +0.07]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000183]  ? lock_release+0x13f/0x270
[  +0.05]  ? lock_is_held_type+0xa2/0x110
[  +0.06]  ? amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000183]  amdgpu_bo_release_notify+0xc4/0x160 [amdgpu]
[  +0.000185]  ttm_bo_release+0x4c6/0x580 [ttm]
[  +0.10]  amdgpu_bo_unref+0x1a/0x30 [amdgpu]
[  +0.000183]  amdgpu_vm_free_table+0x76/0xa0 [amdgpu]
[  +0.000189]  amdgpu_vm_free_pts+0xb8/0xf0 [amdgpu]
[  +0.000189]  amdgpu_vm_update_ptes+0x411/0x770 [amdgpu]
[  +0.000191]  amdgpu_vm_bo_update_mapping+0x324/0x4f0 [amdgpu]
[  +0.000191]  amdgpu_vm_bo_update+0x251/0x610 [amdgpu]
[  +0.000191]  update_gpuvm_pte+0xcc/0x290 [amdgpu]
[  +0.000229]  ? amdgpu_vm_bo_map+0xd7/0x130 [amdgpu]
[  +0.000190]  amdgpu_amdkfd_gpuvm_map_memory_to_gpu+0x912/0xf60
[amdgpu]
[  +0.000234]  kfd_ioctl_map_memory_to_gpu+0x182/0x320 [amdgpu]
[  +0.000218]  kfd_ioctl+0x2b9/0x600 [amdgpu]
[  +0.000216]  ? kfd_ioctl_unmap_memory_from_gpu+0x270/0x270 [amdgpu]
[  +0.000216]  ? lock_release+0x13f/0x270
[  +0.06]  ? __fget_files+0x107/0x1e0
[  +0.07]  __x64_sys_ioctl+0x8b/0xd0
[  +0.07]  do_syscall_64+0x36/0x70
[  +0.04]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  +0.07] RIP: 0033:0x7fbff90a7317
[  +0.04] Code: b3 66 90 48 8b 05 71 4b 2d 00 64 c7 00 26 00 00 00
48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f
05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 41 4b 2d 00 f7 d8 64 89 01 48
[  +0.05] RSP: 002b:7fbe301fe648 EFLAGS: 0246 ORIG_RAX:
0010
[  +0.06] RAX: ffda RBX: 7fbcc402d820 RCX:
7fbff90a7317
[  +0.03] RDX: 7fbe301fe690 RSI: c0184b18 RDI:
0004
[  +0.03] RBP: 7fbe301fe690 R08:  R09:
7fbcc402d880
[  +0.03] R10: 02001000 R11: 0246 R12:
c0184b18
[  +0.03] R13: 0004 R14: 7fbf689593a0 R15:
7fbcc402d820

Cc: Christian König 
Cc: Felix Kuehling 
Cc: Alex Deucher 

Fixes: 627b92ef9d7c ("drm/amdgpu: Wipe all VRAM on free when RAS is
enabled")
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 36bb41b027ec..6ccd2be685f5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ 

[PATCH 2/3] drm/amdgpu: Nerf "buff" to "buf"

2022-02-03 Thread Luben Tuikov
Buffer is abbreviated "buf", not "buff", which
means something entirely different.

Cc: Kent Russell 
Cc: Alex Deucher 
Signed-off-by: Luben Tuikov 
---
 .../gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c| 22 +--
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index 32f38d0dd43dd9..e56d2c79b444bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -77,11 +77,11 @@ static bool is_fru_eeprom_supported(struct amdgpu_device 
*adev)
 }
 
 static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
- unsigned char *buff)
+ unsigned char *buf)
 {
int ret, size;
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buff, 1);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buf, 1);
if (ret < 1) {
DRM_WARN("FRU: Failed to get size field");
return ret;
@@ -90,9 +90,9 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, 
uint32_t addrptr,
/* The size returned by the i2c requires subtraction of 0xC0 since the
 * size apparently always reports as 0xC0+actual size.
 */
-   size = buff[0] - I2C_PRODUCT_INFO_OFFSET;
+   size = buf[0] - I2C_PRODUCT_INFO_OFFSET;
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, 
buff, size);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, buf, 
size);
if (ret < 1) {
DRM_WARN("FRU: Failed to get data field");
return ret;
@@ -129,7 +129,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * and the language field, so just start from 0xb, manufacturer size
 */
addrptr = FRU_EEPROM_MADDR + 0xb;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
return -EINVAL;
@@ -139,7 +139,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * size field being 1 byte. This pattern continues below.
 */
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU product name, ret:%d", size);
return -EINVAL;
@@ -156,7 +156,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
adev->product_name[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
if (size < 1) {
DRM_ERROR("Failed to read FRU product number, ret:%d", size);
return -EINVAL;
@@ -170,11 +170,11 @@ int amdgpu_fru_get_product_info(struct amdgpu_device 
*adev)
DRM_WARN("FRU Product Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->product_number) - 1;
}
-   memcpy(adev->product_number, buff, len);
+   memcpy(adev->product_number, buf, len);
adev->product_number[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
 
if (size < 1) {
DRM_ERROR("Failed to read FRU product version, ret:%d", size);
@@ -182,7 +182,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
}
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buff);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
 
if (size < 1) {
DRM_ERROR("Failed to read FRU serial number, ret:%d", size);
@@ -197,7 +197,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Serial Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->serial) - 1;
}
-   memcpy(adev->serial, buff, len);
+   memcpy(adev->serial, buf, len);
adev->serial[len] = '\0';
 
return 0;
-- 
2.35.0.3.gb23dac905b



[PATCH 3/3] drm/amdgpu: Prevent random memory access in FRU code

2022-02-03 Thread Luben Tuikov
Prevent random memory access in the FRU EEPROM code by passing the size of
the destination buffer to the reading routine, and reading no more than the
size of the buffer.

Cc: Kent Russell 
Cc: Alex Deucher 
Signed-off-by: Luben Tuikov 
---
 .../gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c| 21 +++
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index e56d2c79b444bb..d9cc955579fa0b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -77,9 +77,10 @@ static bool is_fru_eeprom_supported(struct amdgpu_device 
*adev)
 }
 
 static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
- unsigned char *buf)
+ unsigned char *buf, size_t buf_size)
 {
-   int ret, size;
+   int ret;
+   u8 size;
 
ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr, buf, 1);
if (ret < 1) {
@@ -90,9 +91,11 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device 
*adev, uint32_t addrptr,
/* The size returned by the i2c requires subtraction of 0xC0 since the
 * size apparently always reports as 0xC0+actual size.
 */
-   size = buf[0] - I2C_PRODUCT_INFO_OFFSET;
+   size = buf[0] & 0x3F;
+   size = min_t(size_t, size, buf_size);
 
-   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1, buf, 
size);
+   ret = amdgpu_eeprom_read(adev->pm.fru_eeprom_i2c_bus, addrptr + 1,
+buf, size);
if (ret < 1) {
DRM_WARN("FRU: Failed to get data field");
return ret;
@@ -129,7 +132,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * and the language field, so just start from 0xb, manufacturer size
 */
addrptr = FRU_EEPROM_MADDR + 0xb;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);
return -EINVAL;
@@ -139,7 +142,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 * size field being 1 byte. This pattern continues below.
 */
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU product name, ret:%d", size);
return -EINVAL;
@@ -156,7 +159,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
adev->product_name[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
if (size < 1) {
DRM_ERROR("Failed to read FRU product number, ret:%d", size);
return -EINVAL;
@@ -174,7 +177,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
adev->product_number[len] = '\0';
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
 
if (size < 1) {
DRM_ERROR("Failed to read FRU product version, ret:%d", size);
@@ -182,7 +185,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
}
 
addrptr += size + 1;
-   size = amdgpu_fru_read_eeprom(adev, addrptr, buf);
+   size = amdgpu_fru_read_eeprom(adev, addrptr, buf, sizeof(buf));
 
if (size < 1) {
DRM_ERROR("Failed to read FRU serial number, ret:%d", size);
-- 
2.35.0.3.gb23dac905b



[PATCH 1/3] drm/amdgpu: Don't offset by 2 in FRU EEPROM

2022-02-03 Thread Luben Tuikov
Read buffers no longer expose the I2C address, and so we don't need to
offset by two when we get the read data.

Cc: Alex Deucher 
Cc: Kent Russell 
Cc: Andrey Grodzovsky 
Fixes: bd607166af7fe3 ("drm/amdgpu: Enable reading FRU chip via I2C v3")
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c | 14 +-
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
index ce5d5ee336a990..32f38d0dd43dd9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
@@ -103,17 +103,13 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device 
*adev, uint32_t addrptr,
 
 int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
 {
-   unsigned char buff[AMDGPU_PRODUCT_NAME_LEN+2];
+   unsigned char buff[AMDGPU_PRODUCT_NAME_LEN];
u32 addrptr;
int size, len;
-   int offset = 2;
 
if (!is_fru_eeprom_supported(adev))
return 0;
 
-   if (adev->asic_type == CHIP_ALDEBARAN)
-   offset = 0;
-
/* If algo exists, it means that the i2c_adapter's initialized */
if (!adev->pm.fru_eeprom_i2c_bus || !adev->pm.fru_eeprom_i2c_bus->algo) 
{
DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
@@ -155,8 +151,8 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
AMDGPU_PRODUCT_NAME_LEN);
len = AMDGPU_PRODUCT_NAME_LEN - 1;
}
-   /* Start at 2 due to buff using fields 0 and 1 for the address */
-   memcpy(adev->product_name, &buff[offset], len);
+
+   memcpy(adev->product_name, buff, len);
adev->product_name[len] = '\0';
 
addrptr += size + 1;
@@ -174,7 +170,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Product Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->product_number) - 1;
}
-   memcpy(adev->product_number, &buff[offset], len);
+   memcpy(adev->product_number, buff, len);
adev->product_number[len] = '\0';
 
addrptr += size + 1;
@@ -201,7 +197,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
DRM_WARN("FRU Serial Number is larger than 16 characters. This 
is likely a mistake");
len = sizeof(adev->serial) - 1;
}
-   memcpy(adev->serial, &buff[offset], len);
+   memcpy(adev->serial, buff, len);
adev->serial[len] = '\0';
 
return 0;

base-commit: 1b768224871f72e594f41eded3a14d682e39f796
-- 
2.35.0.3.gb23dac905b



Re: [PATCH v1] drm/amdgpu: Print once if RAS unsupported

2022-02-03 Thread Alex Deucher
On Thu, Feb 3, 2022 at 6:14 PM Luben Tuikov  wrote:
>
> MESA polls for errors every 2-3 seconds. Printing with dev_info() causes
> the dmesg log to fill up with the same message, e.g.,
>
> [18028.206676] amdgpu 0000:0b:00.0: amdgpu: df doesn't config ras function.
>
> Make it dev_dbg_once(), as it isn't something correctable during boot or
> thereafter, so printing just once is sufficient. Also sanitize the message.
>
> Cc: Alex Deucher 
> Cc: Hawking Zhang 
> Cc: John Clements 
> Cc: Tao Zhou 
> Cc: yipechai 
> Fixes: e93ea3d0cf434b ("drm/amdgpu: Modify gfx block to fit for the unified 
> ras block data and ops")
> Signed-off-by: Luben Tuikov 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 16 
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index 9d7c778c1a2d8e..e440a5268acecf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -952,8 +952,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device 
> *adev,
> } else {
> block_obj = amdgpu_ras_get_ras_block(adev, info->head.block, 
> 0);
> if (!block_obj || !block_obj->hw_ops)   {
> -   dev_info(adev->dev, "%s doesn't config ras 
> function.\n",
> -   get_ras_block_str(&info->head));
> +   dev_dbg_once(adev->dev, "%s doesn't config RAS 
> function\n",
> +get_ras_block_str(&info->head));
> return -EINVAL;
> }
>
> @@ -1028,8 +1028,8 @@ int amdgpu_ras_reset_error_status(struct amdgpu_device 
> *adev,
> return -EINVAL;
>
> if (!block_obj || !block_obj->hw_ops)   {
> -   dev_info(adev->dev, "%s doesn't config ras function.\n",
> -   ras_block_str(block));
> +   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
> +ras_block_str(block));
> return -EINVAL;
> }
>
> @@ -1066,8 +1066,8 @@ int amdgpu_ras_error_inject(struct amdgpu_device *adev,
> return -EINVAL;
>
> if (!block_obj || !block_obj->hw_ops)   {
> -   dev_info(adev->dev, "%s doesn't config ras function.\n",
> -   get_ras_block_str(&info->head));
> +   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
> +get_ras_block_str(&info->head));
> return -EINVAL;
> }
>
> @@ -1717,8 +1717,8 @@ static void amdgpu_ras_error_status_query(struct 
> amdgpu_device *adev,
> info->head.sub_block_index);
>
> if (!block_obj || !block_obj->hw_ops) {
> -   dev_info(adev->dev, "%s doesn't config ras function.\n",
> -   get_ras_block_str(&info->head));
> +   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
> +get_ras_block_str(&info->head));
> return;
> }
>
>
> base-commit: cf33ae90884f254d683436fc2538b99dc4932447
> --
> 2.35.0.3.gb23dac905b
>


[PATCH v1] drm/amdgpu: Print once if RAS unsupported

2022-02-03 Thread Luben Tuikov
MESA polls for errors every 2-3 seconds. Printing with dev_info() causes
the dmesg log to fill up with the same message, e.g.,

[18028.206676] amdgpu 0000:0b:00.0: amdgpu: df doesn't config ras function.

Make it dev_dbg_once(), as it isn't something correctable during boot or
thereafter, so printing just once is sufficient. Also sanitize the message.

Cc: Alex Deucher 
Cc: Hawking Zhang 
Cc: John Clements 
Cc: Tao Zhou 
Cc: yipechai 
Fixes: e93ea3d0cf434b ("drm/amdgpu: Modify gfx block to fit for the unified ras 
block data and ops")
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 9d7c778c1a2d8e..e440a5268acecf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -952,8 +952,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device 
*adev,
} else {
block_obj = amdgpu_ras_get_ras_block(adev, info->head.block, 0);
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_dbg_once(adev->dev, "%s doesn't config RAS 
function\n",
+get_ras_block_str(&info->head));
return -EINVAL;
}
 
@@ -1028,8 +1028,8 @@ int amdgpu_ras_reset_error_status(struct amdgpu_device 
*adev,
return -EINVAL;
 
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   ras_block_str(block));
+   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
+ras_block_str(block));
return -EINVAL;
}
 
@@ -1066,8 +1066,8 @@ int amdgpu_ras_error_inject(struct amdgpu_device *adev,
return -EINVAL;
 
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
+get_ras_block_str(&info->head));
return -EINVAL;
}
 
@@ -1717,8 +1717,8 @@ static void amdgpu_ras_error_status_query(struct 
amdgpu_device *adev,
info->head.sub_block_index);
 
if (!block_obj || !block_obj->hw_ops) {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
+get_ras_block_str(&info->head));
return;
}
 

base-commit: cf33ae90884f254d683436fc2538b99dc4932447
-- 
2.35.0.3.gb23dac905b
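For reference, a simplified sketch of the _once mechanics relied on
above, patterned after the kernel's once_lite helpers rather than the
exact macro: a static per-call-site flag suppresses every print after
the first, so periodic polling can no longer flood dmesg.

	#define my_dev_dbg_once(dev, fmt, ...)			\
	({							\
		static bool __print_once;			\
								\
		if (!__print_once) {				\
			__print_once = true;			\
			dev_dbg(dev, fmt, ##__VA_ARGS__);	\
		}						\
	})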



Re: [PATCH] drm/amd/display: Cap pflip irqs per max otg number

2022-02-03 Thread Kazlauskas, Nicholas

On 2/3/2022 5:14 PM, roman...@amd.com wrote:

From: Roman Li 

[Why]
pflip interrupts are mapped 1 to 1 to otg id.
e.g. if irq_src=26 corresponds to otg0 then 27->otg1, 28->otg2...

Linux DM registers pflip interrupts per number of crtcs.
In fused pipe case crtc numbers can be less than otg id.

e.g. if one pipe out of 3 (otg#0-2) is fused, adev->mode_info.num_crtc=2,
so DM only registers irq_src 26,27.
This is a bug since if pipe#2 remains unfused DM never gets
otg2 pflip interrupt (irq_src=28).
That may result in gfx failure due to pflip timeout.

[How]
Register pflip interrupts per max num of otg instead of num_crtc

Signed-off-by: Roman Li 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  drivers/gpu/drm/amd/display/dc/core/dc.c  | 2 ++
  drivers/gpu/drm/amd/display/dc/dc.h   | 1 +
  3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 8f53c9f..10ca3fc 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3646,7 +3646,7 @@ static int dcn10_register_irq_handlers(struct 
amdgpu_device *adev)
  
  	/* Use GRPH_PFLIP interrupt */

for (i = DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT;
-   i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + 
adev->mode_info.num_crtc - 1;
+   i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + 
dc->caps.max_otg_num - 1;
i++) {
r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DCE, i, 
&adev->pageflip_irq);
if (r) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 1d9404f..70a0b89 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1220,6 +1220,8 @@ struct dc *dc_create(const struct dc_init_data 
*init_params)
  
  		dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
  
+		dc->caps.max_otg_num = dc->res_pool->res_cap->num_timing_generator;

+
if (dc->res_pool->dmcu != NULL)
dc->versions.dmcu_version = 
dc->res_pool->dmcu->dmcu_version;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 69d264d..af05877 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -200,6 +200,7 @@ struct dc_caps {
bool edp_dsc_support;
bool vbios_lttpr_aware;
bool vbios_lttpr_enable;
+   uint32_t max_otg_num;
  };
  
  struct dc_bug_wa {




Re: [PATCH] drm/amdgpu: Print once if RAS unsupported

2022-02-03 Thread Deucher, Alexander

We can probably just make these dev_dbg().  The vast majority of cards are 
non-RAS.  No need to print this at all in most cases.

Alex


From: Tuikov, Luben 
Sent: Thursday, February 3, 2022 5:14 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Tuikov, Luben ; Deucher, Alexander 
; Zhang, Hawking ; Clements, 
John ; Zhou1, Tao ; Chai, Thomas 

Subject: [PATCH] drm/amdgpu: Print once if RAS unsupported

MESA polls for errors every 2-3 seconds. Printing with dev_info() causes
the dmesg log to fill up with the same message, e.g.,

[18028.206676] amdgpu 0000:0b:00.0: amdgpu: df doesn't config ras function.

Make it dev_info_once(), as it isn't something correctable during boot, so
printing just once is sufficient. Also sanitize the message.

Cc: Alex Deucher 
Cc: Hawking Zhang 
Cc: John Clements 
Cc: Tao Zhou 
Cc: yipechai 
Fixes: e93ea3d0cf434b ("drm/amdgpu: Modify gfx block to fit for the unified ras 
block data and ops")
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 9d7c778c1a2d8e..cddbfbb1d6447a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -952,8 +952,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device 
*adev,
 } else {
 block_obj = amdgpu_ras_get_ras_block(adev, info->head.block, 
0);
 if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS 
function\n",
+ get_ras_block_str(&info->head));
 return -EINVAL;
 }

@@ -1028,8 +1028,8 @@ int amdgpu_ras_reset_error_status(struct amdgpu_device 
*adev,
 return -EINVAL;

 if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   ras_block_str(block));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ ras_block_str(block));
 return -EINVAL;
 }

@@ -1066,8 +1066,8 @@ int amdgpu_ras_error_inject(struct amdgpu_device *adev,
 return -EINVAL;

 if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ get_ras_block_str(&info->head));
 return -EINVAL;
 }

@@ -1717,8 +1717,8 @@ static void amdgpu_ras_error_status_query(struct 
amdgpu_device *adev,
 info->head.sub_block_index);

 if (!block_obj || !block_obj->hw_ops) {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ get_ras_block_str(&info->head));
 return;
 }


base-commit: cf33ae90884f254d683436fc2538b99dc4932447
--
2.35.0.3.gb23dac905b



[PATCH] drm/amd/display: Cap pflip irqs per max otg number

2022-02-03 Thread Roman.Li
From: Roman Li 

[Why]
pflip interrupts are mapped 1 to 1 to otg id.
e.g. if irq_src=26 corresponds to otg0 then 27->otg1, 28->otg2...

Linux DM registers pflip interrupts per number of crtcs.
In fused pipe case crtc numbers can be less than otg id.

e.g. if one pipe out of 3 (otg#0-2) is fused, adev->mode_info.num_crtc=2,
so DM only registers irq_src 26,27.
This is a bug since if pipe#2 remains unfused DM never gets
otg2 pflip interrupt (irq_src=28).
That may result in gfx failure due to pflip timeout.

[How]
Register pflip interrupts per max num of otg instead of num_crtc

Signed-off-by: Roman Li 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c  | 2 ++
 drivers/gpu/drm/amd/display/dc/dc.h   | 1 +
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 8f53c9f..10ca3fc 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3646,7 +3646,7 @@ static int dcn10_register_irq_handlers(struct 
amdgpu_device *adev)
 
/* Use GRPH_PFLIP interrupt */
for (i = DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT;
-   i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + 
adev->mode_info.num_crtc - 1;
+   i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + 
dc->caps.max_otg_num - 1;
i++) {
r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DCE, i, 
&adev->pageflip_irq);
if (r) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 1d9404f..70a0b89 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1220,6 +1220,8 @@ struct dc *dc_create(const struct dc_init_data 
*init_params)
 
dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
 
+   dc->caps.max_otg_num = 
dc->res_pool->res_cap->num_timing_generator;
+
if (dc->res_pool->dmcu != NULL)
dc->versions.dmcu_version = 
dc->res_pool->dmcu->dmcu_version;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 69d264d..af05877 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -200,6 +200,7 @@ struct dc_caps {
bool edp_dsc_support;
bool vbios_lttpr_aware;
bool vbios_lttpr_enable;
+   uint32_t max_otg_num;
 };
 
 struct dc_bug_wa {
-- 
2.7.4
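A standalone illustration of the fused-pipe case from the commit message
above; the irq_src base of 26 is the example value given in [Why], and
the loop bodies are elided:

	int num_crtc = 2, max_otg_num = 3, i;

	/* old loop: covers irq_src 26 and 27 only, so if otg2 is one
	 * of the live pipes its pflip interrupt (irq_src 28) is never
	 * registered
	 */
	for (i = 26; i <= 26 + num_crtc - 1; i++)
		;
	/* new loop: covers irq_src 26..28 regardless of which pipe
	 * is fused
	 */
	for (i = 26; i <= 26 + max_otg_num - 1; i++)
		;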



[PATCH] drm/amdgpu: Print once if RAS unsupported

2022-02-03 Thread Luben Tuikov
MESA polls for errors every 2-3 seconds. Printing with dev_info() causes
the dmesg log to fill up with the same message, e.g.,

[18028.206676] amdgpu 0000:0b:00.0: amdgpu: df doesn't config ras function.

Make it dev_info_once(), as it isn't something correctable during boot, so
printing just once is sufficient. Also sanitize the message.

Cc: Alex Deucher 
Cc: Hawking Zhang 
Cc: John Clements 
Cc: Tao Zhou 
Cc: yipechai 
Fixes: e93ea3d0cf434b ("drm/amdgpu: Modify gfx block to fit for the unified ras 
block data and ops")
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 9d7c778c1a2d8e..cddbfbb1d6447a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -952,8 +952,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device 
*adev,
} else {
block_obj = amdgpu_ras_get_ras_block(adev, info->head.block, 0);
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS 
function\n",
+ get_ras_block_str(&info->head));
return -EINVAL;
}
 
@@ -1028,8 +1028,8 @@ int amdgpu_ras_reset_error_status(struct amdgpu_device 
*adev,
return -EINVAL;
 
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   ras_block_str(block));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ ras_block_str(block));
return -EINVAL;
}
 
@@ -1066,8 +1066,8 @@ int amdgpu_ras_error_inject(struct amdgpu_device *adev,
return -EINVAL;
 
if (!block_obj || !block_obj->hw_ops)   {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ get_ras_block_str(&info->head));
return -EINVAL;
}
 
@@ -1717,8 +1717,8 @@ static void amdgpu_ras_error_status_query(struct 
amdgpu_device *adev,
info->head.sub_block_index);
 
if (!block_obj || !block_obj->hw_ops) {
-   dev_info(adev->dev, "%s doesn't config ras function.\n",
-   get_ras_block_str(&info->head));
+   dev_info_once(adev->dev, "%s doesn't config RAS function\n",
+ get_ras_block_str(&info->head));
return;
}
 

base-commit: cf33ae90884f254d683436fc2538b99dc4932447
-- 
2.35.0.3.gb23dac905b



[PATCH] drm/amdgpu: Fix wait for RLCG command completion

2022-02-03 Thread Victor Skvortsov
The if (!(tmp & flag)) condition will always evaluate to true
when the flag is 0x0 (AMDGPU_RLCG_GC_WRITE). Instead, check
that the address bits are cleared to determine whether
the command is complete.

Signed-off-by: Victor Skvortsov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
index e1288901beb6..a8babe3bccb8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
@@ -902,7 +902,7 @@ static u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device 
*adev, u32 offset, u32 v
 
for (i = 0; i < timeout; i++) {
tmp = readl(scratch_reg1);
-   if (!(tmp & flag))
+   if (!(tmp & AMDGPU_RLCG_SCRATCH1_ADDRESS_MASK))
break;
udelay(10);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
index 40803aab136f..68f592f0e992 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
@@ -43,6 +43,8 @@
 #define AMDGPU_RLCG_WRONG_OPERATION_TYPE   0x200
 #define AMDGPU_RLCG_REG_NOT_IN_RANGE   0x100
 
+#define AMDGPU_RLCG_SCRATCH1_ADDRESS_MASK  0xF
+
 /* all asic after AI use this offset */
 #define mmRCC_IOV_FUNC_IDENTIFIER 0xDE5
 /* tonga/fiji use this offset */
-- 
2.25.1
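A standalone illustration of why the old check was a no-op for GC
writes; the register value is made up:

	u32 tmp = 0xdeadbeef;	/* whatever scratch_reg1 reads back */
	u32 flag = 0x0;		/* AMDGPU_RLCG_GC_WRITE */

	/* tmp & 0x0 is always 0, so !(tmp & flag) is true on the first
	 * iteration and the wait loop exits before the firmware has
	 * cleared the address bits -- hence the switch to testing
	 * AMDGPU_RLCG_SCRATCH1_ADDRESS_MASK instead.
	 */
	if (!(tmp & flag))
		;	/* falsely treated as "command complete" */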



[PATCH AUTOSEL 5.15 35/41] drm/amd/display: Correct MPC split policy for DCN301

2022-02-03 Thread Sasha Levin
From: Zhan Liu 

[ Upstream commit ac46d93235074a6c5d280d35771c23fd8620e7d9 ]

[Why]
DCN301 has seamless boot enabled. With MPC split enabled
at the same time, system will hang.

[How]
Revert MPC split policy back to "MPC_SPLIT_AVOID". Since we have
ODM combine enabled on DCN301, pipe split is not necessary here.

Signed-off-by: Zhan Liu 
Reviewed-by: Charlene Liu 
Signed-off-by: Alex Deucher 
Signed-off-by: Sasha Levin 
---
 drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
index 9e2f18a0c9483..26ebe00a55f67 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
@@ -863,7 +863,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.disable_clock_gate = true,
.disable_pplib_clock_request = true,
.disable_pplib_wm_range = true,
-   .pipe_split_policy = MPC_SPLIT_DYNAMIC,
+   .pipe_split_policy = MPC_SPLIT_AVOID,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,
-- 
2.34.1



[PATCH AUTOSEL 5.16 40/52] drm/amdgpu/display: use msleep rather than udelay for long delays

2022-02-03 Thread Sasha Levin
From: Alex Deucher 

[ Upstream commit 98fdcacb45f7cd2092151d6af2e60152811eb79c ]

Some architectures (e.g., ARM) throw a compilation error if the
udelay is too long.  In general, udelays of longer than 2000us are
not recommended on any architecture.  Switch to msleep in these
cases.

Reviewed-by: Harry Wentland 
Signed-off-by: Alex Deucher 
Signed-off-by: Sasha Levin 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 01ac1a64c78b9..d1b47c0d7791a 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -6038,7 +6038,7 @@ bool dpcd_write_128b_132b_sst_payload_allocation_table(
}
}
retries++;
-   udelay(5000);
+   msleep(5);
}
 
if (!result && retries == max_retries) {
@@ -6090,7 +6090,7 @@ bool dpcd_poll_for_allocation_change_trigger(struct 
dc_link *link)
break;
}
 
-   udelay(5000);
+   msleep(5);
}
 
if (result == ACT_FAILED) {
-- 
2.34.1



[PATCH AUTOSEL 5.16 39/52] drm/amdgpu/display: adjust msleep limit in dp_wait_for_training_aux_rd_interval

2022-02-03 Thread Sasha Levin
From: Alex Deucher 

[ Upstream commit dc919d670c6fd1ac81ebf31625cd19579f7b3d4c ]

Some architectures (e.g., ARM) have relatively low udelay limits.
On most architectures, anything longer than 2000us is not recommended.
Change the check to align with other similar checks in DC.

Reviewed-by: Harry Wentland 
Signed-off-by: Alex Deucher 
Signed-off-by: Sasha Levin 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 13bc69d6b6791..01ac1a64c78b9 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -201,7 +201,7 @@ void dp_wait_for_training_aux_rd_interval(
uint32_t wait_in_micro_secs)
 {
 #if defined(CONFIG_DRM_AMD_DC_DCN)
-   if (wait_in_micro_secs > 16000)
+   if (wait_in_micro_secs > 1000)
msleep(wait_in_micro_secs/1000);
else
udelay(wait_in_micro_secs);
-- 
2.34.1
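
The two patches above apply the same rule: waits in the millisecond range should
sleep via msleep() rather than spin in udelay(). A userspace analogue of that
split, using a hypothetical delay_us() helper (the patches open-code the check at
each call site rather than adding such a helper):

#include <time.h>

static void delay_us(unsigned int us)
{
    if (us > 1000) {
        /* Long wait: sleep, like msleep(us / 1000); scheduler can run. */
        struct timespec ts = { us / 1000000, (us % 1000000) * 1000L };
        nanosleep(&ts, NULL);
    } else {
        /* Short wait: busy-poll, like udelay(us). */
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000000L +
                 (now.tv_nsec - start.tv_nsec) / 1000L < (long)us);
    }
}

int main(void)
{
    delay_us(5000);    /* ~5ms: takes the sleeping path */
    delay_us(10);      /* 10us: takes the busy-wait path */
    return 0;
}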



[PATCH AUTOSEL 5.16 38/52] drm/amd/display: Correct MPC split policy for DCN301

2022-02-03 Thread Sasha Levin
From: Zhan Liu 

[ Upstream commit ac46d93235074a6c5d280d35771c23fd8620e7d9 ]

[Why]
DCN301 has seamless boot enabled. With MPC split enabled
at the same time, the system will hang.

[How]
Revert MPC split policy back to "MPC_SPLIT_AVOID". Since we have
ODM combine enabled on DCN301, pipe split is not necessary here.

Signed-off-by: Zhan Liu 
Reviewed-by: Charlene Liu 
Signed-off-by: Alex Deucher 
Signed-off-by: Sasha Levin 
---
 drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
index e472b729d8690..8af80bc05b364 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
@@ -686,7 +686,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.disable_clock_gate = true,
.disable_pplib_clock_request = true,
.disable_pplib_wm_range = true,
-   .pipe_split_policy = MPC_SPLIT_DYNAMIC,
+   .pipe_split_policy = MPC_SPLIT_AVOID,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,
-- 
2.34.1



Re: [PATCH] drm/amdgpu: drop experimental flag on aldebaran

2022-02-03 Thread Felix Kuehling

Am 2022-02-03 um 14:09 schrieb Alex Deucher:

These have been at production level for a while. Drop
the flag.

Signed-off-by: Alex Deucher 


Reviewed-by: Felix Kuehling 



---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 16a47894579d..d7fff876ad13 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1916,10 +1916,10 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
  
  	/* Aldebaran */

-   {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
+   {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
  
  	/* CYAN_SKILLFISH */

{0x1002, 0x13FE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_CYAN_SKILLFISH|AMD_IS_APU},


[PATCH] drm/amdgpu: drop experimental flag on aldebaran

2022-02-03 Thread Alex Deucher
These have been at production level for a while. Drop
the flag.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 16a47894579d..d7fff876ad13 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1916,10 +1916,10 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
 
/* Aldebaran */
-   {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
-   {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
+   {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+   {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
 
/* CYAN_SKILLFISH */
{0x1002, 0x13FE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 
CHIP_CYAN_SKILLFISH|AMD_IS_APU},
-- 
2.34.1



[PATCH 3/3] drm/amdgpu: move dpcs_3_0_3 headers from dcn to dpcs

2022-02-03 Thread Alex Deucher
To align with other headers.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c   | 4 ++--
 .../amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_3_offset.h| 0
 .../amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_3_sh_mask.h   | 0
 3 files changed, 2 insertions(+), 2 deletions(-)
 rename drivers/gpu/drm/amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_3_offset.h 
(100%)
 rename drivers/gpu/drm/amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_3_sh_mask.h 
(100%)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
index 2de687f64cf6..36649716e991 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
@@ -48,8 +48,8 @@
 #include "sienna_cichlid_ip_offset.h"
 #include "dcn/dcn_3_0_3_offset.h"
 #include "dcn/dcn_3_0_3_sh_mask.h"
-#include "dcn/dpcs_3_0_3_offset.h"
-#include "dcn/dpcs_3_0_3_sh_mask.h"
+#include "dpcs/dpcs_3_0_3_offset.h"
+#include "dpcs/dpcs_3_0_3_sh_mask.h"
 #include "nbio/nbio_2_3_offset.h"
 
 #define DC_LOGGER_INIT(logger)
diff --git a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_3_offset.h 
b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_3_0_3_offset.h
similarity index 100%
rename from drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_3_offset.h
rename to drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_3_0_3_offset.h
diff --git a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_3_sh_mask.h 
b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_3_0_3_sh_mask.h
similarity index 100%
rename from drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_3_sh_mask.h
rename to drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_3_0_3_sh_mask.h
-- 
2.34.1



[PATCH 2/3] drm/amdgpu: move dpcs_3_0_0 headers from dcn to dpcs

2022-02-03 Thread Alex Deucher
To align with other headers.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c  | 4 ++--
 drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c | 4 ++--
 drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c   | 4 ++--
 drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c   | 4 ++--
 drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c  | 4 ++--
 .../gpu/drm/amd/display/dc/gpio/dcn30/hw_translate_dcn30.c| 4 ++--
 drivers/gpu/drm/amd/display/dc/irq/dcn30/irq_service_dcn30.c  | 4 ++--
 .../amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_0_offset.h| 0
 .../amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_0_sh_mask.h   | 0
 9 files changed, 14 insertions(+), 14 deletions(-)
 rename drivers/gpu/drm/amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_0_offset.h 
(100%)
 rename drivers/gpu/drm/amd/include/asic_reg/{dcn => dpcs}/dpcs_3_0_0_sh_mask.h 
(100%)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
index 0602bde78e6c..589131d415fd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
@@ -42,8 +42,8 @@
 
 #include "nbio/nbio_7_4_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 
 #include "mmhub/mmhub_2_0_0_offset.h"
 #include "mmhub/mmhub_2_0_0_sh_mask.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
index 8ca26383b568..f10f7a0ca02a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
@@ -72,8 +72,8 @@
 
 #include "nbio/nbio_7_4_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 
 #include "mmhub/mmhub_2_0_0_offset.h"
 #include "mmhub/mmhub_2_0_0_sh_mask.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
index 5d9637b07429..4daf8931aa7c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
@@ -73,8 +73,8 @@
 
 #include "nbio/nbio_7_2_0_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 
 #include "reg_helper.h"
 #include "dce/dmub_abm.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
index e512ae6d00d4..88318e8ffca8 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
@@ -66,8 +66,8 @@
 #include "dimgrey_cavefish_ip_offset.h"
 #include "dcn/dcn_3_0_2_offset.h"
 #include "dcn/dcn_3_0_2_sh_mask.h"
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 #include "nbio/nbio_7_4_offset.h"
 #include "amdgpu_socbb.h"
 
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c 
b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
index 5f6ae3edb755..3b7df1ac26be 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
@@ -42,8 +42,8 @@
 
 #include "nbio/nbio_7_4_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 
 #include "mmhub/mmhub_2_0_0_offset.h"
 #include "mmhub/mmhub_2_0_0_sh_mask.h"
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_translate_dcn30.c 
b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_translate_dcn30.c
index 0046219a1cc7..6b6b7c7bd12f 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_translate_dcn30.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_translate_dcn30.c
@@ -40,8 +40,8 @@
 
 #include "nbio/nbio_7_4_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"
 
 #include "mmhub/mmhub_2_0_0_offset.h"
 #include "mmhub/mmhub_2_0_0_sh_mask.h"
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn30/irq_service_dcn30.c 
b/drivers/gpu/drm/amd/display/dc/irq/dcn30/irq_service_dcn30.c
index 914ce2ce1c2f..0b68c08fac3f 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dcn30/irq_service_dcn30.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn30/irq_service_dcn30.c
@@ -37,8 +37,8 @@
 
 #include "nbio/nbio_7_4_offset.h"
 
-#include "dcn/dpcs_3_0_0_offset.h"
-#include "dcn/dpcs_3_0_0_sh_mask.h"
+#include "dpcs/dpcs_3_0_0_offset.h"
+#include "dpcs/dpcs_3_0_0_sh_mask.h"

[PATCH 1/3] drm/amdgpu: add missing license to dpcs_3_0_0 headers

2022-02-03 Thread Alex Deucher
MIT.

Signed-off-by: Alex Deucher 
---
 .../gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_offset.h   | 7 +++
 .../gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_sh_mask.h  | 7 +++
 2 files changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_offset.h 
b/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_offset.h
index 67faaf68e9d7..0bb47e06eee8 100644
--- a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_offset.h
+++ b/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_offset.h
@@ -1,3 +1,10 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright (C) 2020 Advanced Micro Devices, Inc.
+ *
+ * Authors: AMD
+ */
+
 #ifndef _dpcs_3_0_0_OFFSET_HEADER
 #define _dpcs_3_0_0_OFFSET_HEADER
 
diff --git a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_sh_mask.h 
b/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_sh_mask.h
index b4ef50a72868..23fa1121a967 100644
--- a/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_sh_mask.h
+++ b/drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_3_0_0_sh_mask.h
@@ -1,3 +1,10 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright (C) 2020 Advanced Micro Devices, Inc.
+ *
+ * Authors: AMD
+ */
+
 #ifndef _dpcs_3_0_0_SH_MASK_HEADER
 #define _dpcs_3_0_0_SH_MASK_HEADER
 
-- 
2.34.1



[PATCH] drm/amdgpu/display: change pipe policy for DCN 2.0

2022-02-03 Thread Alex Deucher
Fixes hangs on driver load on DCN 2.0 parts.

Bug: https://bugzilla.kernel.org/show_bug.cgi?id=215511
Fixes: ee2698cf79cc ("drm/amd/display: Changed pipe split policy to allow for 
multi-display pipe split")
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index fcf388b509db..d9b3f449bf9b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1069,7 +1069,7 @@ static const struct dc_debug_options debug_defaults_drv = 
{
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
-   .pipe_split_policy = MPC_SPLIT_DYNAMIC,
+   .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,
-- 
2.34.1



Re: [PATCH v2] drm/amd/display: Handle removed connector in early_unregister

2022-02-03 Thread Harry Wentland
On 2022-02-03 13:17, Fangzhi Zuo wrote:
> From: Wayne Lin 
> 
> This patch has lived in our internal branch since August
> but somehow missed the merge to upstream.
> 
> Original patch description:
> 
> [Why]
> commit "drm/amd/display: turn DPMS off on connector unplug" and
> commit "drm/amd/display: Clear dc remote sinks on MST disconnect"
> were trying to resolve the resource problem when connectors get
> disconnected under MST scenarios. However, these patches don't
> really clean up all remote sinks, nor do they turn DPMS off on all
> affected streams. Also, they can't handle disconnected connectors
> reported by CSN.
> 
> [How]
> - Revise commit "drm/amd/display: turn DPMS off on connector unplug"
> a bit to handle the non-MST case only.
> - Revert commit "drm/amd/display: Clear dc remote sinks on MST disconnect"
> - Revise the logic in the above patches a bit: turn DPMS off and clear
> the dc remote sink within amdgpu_dm_mst_connector_early_unregister().
> Since drm will call .early_unregister for all disconnected connectors,
> this also ensures we handle disconnected connectors reported by CSN.
> 
> Signed-off-by: Wayne Lin 
> Signed-off-by: Fangzhi Zuo 

Reviewed-by: Harry Wentland 

Harry

> ---
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  7 
>  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 41 +--
>  .../gpu/drm/amd/display/dc/core/dc_stream.c   | 12 ++
>  drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
>  4 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index f5941e59e5ad..529b3ddaa10b 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3034,6 +3034,7 @@ static void handle_hpd_irq_helper(struct 
> amdgpu_dm_connector *aconnector)
>   struct drm_connector *connector = &aconnector->base;
>   struct drm_device *dev = connector->dev;
>   enum dc_connection_type new_connection_type = dc_connection_none;
> + enum dc_connection_type old_connection_type = aconnector->dc_link->type;
>   struct amdgpu_device *adev = drm_to_adev(dev);
>   struct dm_connector_state *dm_con_state = 
> to_dm_connector_state(connector->state);
>   struct dm_crtc_state *dm_crtc_state = NULL;
> @@ -3074,7 +3075,13 @@ static void handle_hpd_irq_helper(struct 
> amdgpu_dm_connector *aconnector)
>   drm_kms_helper_hotplug_event(dev);
>  
>   } else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
> + /**
> +  * MST cases are handled within .early_unregister where we
> +  * can handle disconnected connectors reported by long HPD
> +  * and CSN.
> +  */
>   if (new_connection_type == dc_connection_none &&
> + old_connection_type != dc_connection_mst_branch &&
>   aconnector->dc_link->type == dc_connection_none &&
>   dm_crtc_state)
>   dm_set_dpms_off(aconnector->dc_link, dm_crtc_state);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> index 8e97d21bdf5c..7cd1f1f57d6e 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> @@ -139,11 +139,46 @@ amdgpu_dm_mst_connector_late_register(struct 
> drm_connector *connector)
>  static void
>  amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
>  {
> - struct amdgpu_dm_connector *amdgpu_dm_connector =
> - to_amdgpu_dm_connector(connector);
> - struct drm_dp_mst_port *port = amdgpu_dm_connector->port;
> + struct amdgpu_dm_connector *aconnector =
> +to_amdgpu_dm_connector(connector);
> + struct drm_dp_mst_port *port = aconnector->port;
> + struct dc_stream_update stream_update;
> + struct dc_stream_state *stream_state;
> + struct drm_device *ddev = aconnector->base.dev;
> + struct amdgpu_device *adev = drm_to_adev(ddev);
> + struct dc_link *dc_link = aconnector->dc_link;
> + struct dc_sink *dc_sink = aconnector->dc_sink;
> + bool dpms_off = true;
>  
>   drm_dp_mst_connector_early_unregister(connector, port);
> +
> + ASSERT(dc_link);
> +
> + if (dc_sink) {
> + mutex_lock(&ddev->mode_config.mutex);
> + mutex_lock(&adev->dm.dc_lock);
> +
> + memset(&stream_update, 0, sizeof(stream_update));
> + stream_update.dpms_off = &dpms_off;
> +
> + /*set stream dpms_off*/
> + stream_state = dc_stream_get_stream_by_sink(dc_sink);
> + if (stream_state != NULL) {
> + stream_update.stream = stream_state;
> + 
> dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
> +  

[PATCH v2] drm/amd/display: Handle removed connector in early_unregister

2022-02-03 Thread Fangzhi Zuo
From: Wayne Lin 

This patch has lived in our internal branch since August
but somehow missed the merge to upstream.

Original patch description:

[Why]
commit "drm/amd/display: turn DPMS off on connector unplug" and
commit "drm/amd/display: Clear dc remote sinks on MST disconnect"
were trying to resolve the resource problem when connectors get
disconnected under MST scenarios. However, these patches don't
really clean up all remote sinks, nor do they turn DPMS off on all
affected streams. Also, they can't handle disconnected connectors
reported by CSN.

[How]
- Revise commit "drm/amd/display: turn DPMS off on connector unplug"
a bit to handle the non-MST case only.
- Revert commit "drm/amd/display: Clear dc remote sinks on MST disconnect"
- Revise the logic in the above patches a bit: turn DPMS off and clear
the dc remote sink within amdgpu_dm_mst_connector_early_unregister().
Since drm will call .early_unregister for all disconnected connectors,
this also ensures we handle disconnected connectors reported by CSN.

Signed-off-by: Wayne Lin 
Signed-off-by: Fangzhi Zuo 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  7 
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 41 +--
 .../gpu/drm/amd/display/dc/core/dc_stream.c   | 12 ++
 drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
 4 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f5941e59e5ad..529b3ddaa10b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3034,6 +3034,7 @@ static void handle_hpd_irq_helper(struct 
amdgpu_dm_connector *aconnector)
struct drm_connector *connector = &aconnector->base;
struct drm_device *dev = connector->dev;
enum dc_connection_type new_connection_type = dc_connection_none;
+   enum dc_connection_type old_connection_type = aconnector->dc_link->type;
struct amdgpu_device *adev = drm_to_adev(dev);
struct dm_connector_state *dm_con_state = 
to_dm_connector_state(connector->state);
struct dm_crtc_state *dm_crtc_state = NULL;
@@ -3074,7 +3075,13 @@ static void handle_hpd_irq_helper(struct 
amdgpu_dm_connector *aconnector)
drm_kms_helper_hotplug_event(dev);
 
} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
+   /**
+* MST cases are handled within .early_unregister where we
+* can handle disconnected connectors reported by long HPD
+* and CSN.
+*/
if (new_connection_type == dc_connection_none &&
+   old_connection_type != dc_connection_mst_branch &&
aconnector->dc_link->type == dc_connection_none &&
dm_crtc_state)
dm_set_dpms_off(aconnector->dc_link, dm_crtc_state);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 8e97d21bdf5c..7cd1f1f57d6e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -139,11 +139,46 @@ amdgpu_dm_mst_connector_late_register(struct 
drm_connector *connector)
 static void
 amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
 {
-   struct amdgpu_dm_connector *amdgpu_dm_connector =
-   to_amdgpu_dm_connector(connector);
-   struct drm_dp_mst_port *port = amdgpu_dm_connector->port;
+   struct amdgpu_dm_connector *aconnector =
+to_amdgpu_dm_connector(connector);
+   struct drm_dp_mst_port *port = aconnector->port;
+   struct dc_stream_update stream_update;
+   struct dc_stream_state *stream_state;
+   struct drm_device *ddev = aconnector->base.dev;
+   struct amdgpu_device *adev = drm_to_adev(ddev);
+   struct dc_link *dc_link = aconnector->dc_link;
+   struct dc_sink *dc_sink = aconnector->dc_sink;
+   bool dpms_off = true;
 
drm_dp_mst_connector_early_unregister(connector, port);
+
+   ASSERT(dc_link);
+
+   if (dc_sink) {
+   mutex_lock(&ddev->mode_config.mutex);
+   mutex_lock(&adev->dm.dc_lock);
+
+   memset(&stream_update, 0, sizeof(stream_update));
+   stream_update.dpms_off = &dpms_off;
+
+   /*set stream dpms_off*/
+   stream_state = dc_stream_get_stream_by_sink(dc_sink);
+   if (stream_state != NULL) {
+   stream_update.stream = stream_state;
+   
dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
+   
stream_state, &stream_update,
+   
stream_state->ctx->dc->current_state);

[PATCH] drm/amd/display: Handle removed connector in early_unregister

2022-02-03 Thread Fangzhi Zuo
From: Wayne Lin 

This patch has lived in our internal branch since August
but somehow missed the merge to upstream.

Original Patch:
(dc: Handle removed connector in early_unregister)

Signed-off-by: Wayne Lin 
Signed-off-by: Fangzhi Zuo 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  7 
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 41 +--
 .../gpu/drm/amd/display/dc/core/dc_stream.c   | 12 ++
 drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
 4 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f5941e59e5ad..529b3ddaa10b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3034,6 +3034,7 @@ static void handle_hpd_irq_helper(struct 
amdgpu_dm_connector *aconnector)
struct drm_connector *connector = &aconnector->base;
struct drm_device *dev = connector->dev;
enum dc_connection_type new_connection_type = dc_connection_none;
+   enum dc_connection_type old_connection_type = aconnector->dc_link->type;
struct amdgpu_device *adev = drm_to_adev(dev);
struct dm_connector_state *dm_con_state = 
to_dm_connector_state(connector->state);
struct dm_crtc_state *dm_crtc_state = NULL;
@@ -3074,7 +3075,13 @@ static void handle_hpd_irq_helper(struct 
amdgpu_dm_connector *aconnector)
drm_kms_helper_hotplug_event(dev);
 
} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
+   /**
+* MST cases are handled within .early_unregister where we
+* can handle disconnected connectors reported by long HPD
+* and CSN.
+*/
if (new_connection_type == dc_connection_none &&
+   old_connection_type != dc_connection_mst_branch &&
aconnector->dc_link->type == dc_connection_none &&
dm_crtc_state)
dm_set_dpms_off(aconnector->dc_link, dm_crtc_state);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 8e97d21bdf5c..7cd1f1f57d6e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -139,11 +139,46 @@ amdgpu_dm_mst_connector_late_register(struct 
drm_connector *connector)
 static void
 amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
 {
-   struct amdgpu_dm_connector *amdgpu_dm_connector =
-   to_amdgpu_dm_connector(connector);
-   struct drm_dp_mst_port *port = amdgpu_dm_connector->port;
+   struct amdgpu_dm_connector *aconnector =
+to_amdgpu_dm_connector(connector);
+   struct drm_dp_mst_port *port = aconnector->port;
+   struct dc_stream_update stream_update;
+   struct dc_stream_state *stream_state;
+   struct drm_device *ddev = aconnector->base.dev;
+   struct amdgpu_device *adev = drm_to_adev(ddev);
+   struct dc_link *dc_link = aconnector->dc_link;
+   struct dc_sink *dc_sink = aconnector->dc_sink;
+   bool dpms_off = true;
 
drm_dp_mst_connector_early_unregister(connector, port);
+
+   ASSERT(dc_link);
+
+   if (dc_sink) {
+   mutex_lock(&ddev->mode_config.mutex);
+   mutex_lock(&adev->dm.dc_lock);
+
+   memset(&stream_update, 0, sizeof(stream_update));
+   stream_update.dpms_off = &dpms_off;
+
+   /*set stream dpms_off*/
+   stream_state = dc_stream_get_stream_by_sink(dc_sink);
+   if (stream_state != NULL) {
+   stream_update.stream = stream_state;
+   
dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
+   
stream_state, &stream_update,
+   
stream_state->ctx->dc->current_state);
+   }
+
+   /*clear the remote sink of the link*/
+   dc_link_remove_remote_sink(dc_link, dc_sink);
+   dc_sink_release(dc_sink);
+   aconnector->dc_sink = NULL;
+
+   mutex_unlock(&adev->dm.dc_lock);
+   mutex_unlock(&ddev->mode_config.mutex);
+   }
+
 }
 
 static const struct drm_connector_funcs dm_dp_mst_connector_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index 57cf4cb82370..a77c90c14e85 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -739,3 +739,15 @@ void dc_stream_log(const struct dc *dc, const struct 
dc_stream_state *stream)
stream->link->link_index);
 }
 
+struct 

Re: [PATCH -next] drm/amdkfd: Fix resource_size.cocci warning

2022-02-03 Thread Felix Kuehling



Am 2022-02-03 um 00:04 schrieb Yang Li:

Use resource_size function on resource object instead of explicit
computation.

Eliminate the following coccicheck warning:
./drivers/gpu/drm/amd/amdkfd/kfd_migrate.c:978:11-14: ERROR: Missing
resource_size with res

Reported-by: Abaci Robot 
Signed-off-by: Yang Li 



This patch was already applied in September: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0de5472a01804f43b7c8ddb1132bbfeb8b68674f


Which branch is this for?

Regards,
  Felix




---
  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 8430f6475723..d4287a39be56 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -975,7 +975,7 @@ int svm_migrate_init(struct amdgpu_device *adev)
pgmap->type = 0;
if (pgmap->type == MEMORY_DEVICE_PRIVATE)
devm_release_mem_region(adev->dev, res->start,
-   res->end - res->start + 1);
+   resource_size(res));
return PTR_ERR(r);
}
  


Re: [PATCH] drm/amd/display: Handle removed connector in early_unregister

2022-02-03 Thread Harry Wentland



On 2022-02-02 13:49, Fangzhi Zuo wrote:
> From: Wayne Lin 
> 
> [Why]
> commit "drm/amd/display: turn DPMS off on connector unplug" and
> commit "drm/amd/display: Clear dc remote sinks on MST disconnect"
> were trying to resolve the resource problem when connectors get
> disconnected under MST scenarios. However, these patches don't
> really clean up all remote sinks, nor do they turn DPMS off on all
> affected streams. Also, they can't handle disconnected connectors
> reported by CSN.
> 
> [How]
> - Revise commit "drm/amd/display: turn DPMS off on connector unplug"
> a bit to handle the non-MST case only.
> - Revert commit "drm/amd/display: Clear dc remote sinks on MST disconnect"

I don't see this revert as part of this commit.

Generally, if we revert code it should be done in a single revert commit
that is generated via "git revert".

Harry

> - Revise the logic in the above patches a bit: turn DPMS off and clear
> the dc remote sink within amdgpu_dm_mst_connector_early_unregister().
> Since drm will call .early_unregister for all disconnected connectors,
> this also ensures we handle disconnected connectors reported by CSN.
> 
> Signed-off-by: Wayne Lin 
> Signed-off-by: Fangzhi Zuo 
> ---
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  7 
>  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 41 +--
>  .../gpu/drm/amd/display/dc/core/dc_stream.c   | 12 ++
>  drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
>  4 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index f5941e59e5ad..529b3ddaa10b 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3034,6 +3034,7 @@ static void handle_hpd_irq_helper(struct 
> amdgpu_dm_connector *aconnector)
>   struct drm_connector *connector = &aconnector->base;
>   struct drm_device *dev = connector->dev;
>   enum dc_connection_type new_connection_type = dc_connection_none;
> + enum dc_connection_type old_connection_type = aconnector->dc_link->type;
>   struct amdgpu_device *adev = drm_to_adev(dev);
>   struct dm_connector_state *dm_con_state = 
> to_dm_connector_state(connector->state);
>   struct dm_crtc_state *dm_crtc_state = NULL;
> @@ -3074,7 +3075,13 @@ static void handle_hpd_irq_helper(struct 
> amdgpu_dm_connector *aconnector)
>   drm_kms_helper_hotplug_event(dev);
>  
>   } else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
> + /**
> +  * MST cases are handled within .early_unregister where we
> +  * can handle disconnected connectors reported by long HPD
> +  * and CSN.
> +  */
>   if (new_connection_type == dc_connection_none &&
> + old_connection_type != dc_connection_mst_branch &&
>   aconnector->dc_link->type == dc_connection_none &&
>   dm_crtc_state)
>   dm_set_dpms_off(aconnector->dc_link, dm_crtc_state);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> index 8e97d21bdf5c..411b55596b00 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> @@ -139,11 +139,46 @@ amdgpu_dm_mst_connector_late_register(struct 
> drm_connector *connector)
>  static void
>  amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
>  {
> - struct amdgpu_dm_connector *amdgpu_dm_connector =
> - to_amdgpu_dm_connector(connector);
> - struct drm_dp_mst_port *port = amdgpu_dm_connector->port;
> + struct amdgpu_dm_connector *aconnector =
> +to_amdgpu_dm_connector(connector);
> + struct drm_dp_mst_port *port = aconnector->port;
> + struct dc_stream_update stream_update;
> + struct dc_stream_state *stream_state;
> + struct drm_device *ddev = aconnector->base.dev;
> + struct amdgpu_device *adev = drm_to_adev(ddev);
> + struct dc_link *dc_link = aconnector->dc_link;
> + struct dc_sink *dc_sink = aconnector->dc_sink;
> + bool dpms_off = true;
>  
>   drm_dp_mst_connector_early_unregister(connector, port);
> +
> + ASSERT(dc_link);
> +
> + if (dc_sink) {
> + mutex_lock(&ddev->mode_config.mutex);
> + mutex_lock(&adev->dm.dc_lock);
> +
> + memset(&stream_update, 0, sizeof(stream_update));
> + stream_update.dpms_off = &dpms_off;
> +
> + /*set stream dpms_off*/
> + stream_state = dc_stream_get_stream_by_sink(dc_sink);
> + if (stream_state != NULL) {
> + stream_update.stream = stream_state;
> + 
> 

Re: [PATCH 1/7] drm/selftests: Move i915 buddy selftests into drm

2022-02-03 Thread Christian König

Am 03.02.22 um 14:32 schrieb Arunpravin:

- move i915 buddy selftests into drm selftests folder
- add Makefile and Kconfig support
- add sanitycheck testcase

Prerequisites
- This series of selftest patches is created on top of the
   drm buddy series
- Enable kselftests for DRM as a module in .config

Signed-off-by: Arunpravin 


Only skimmed over this, but offhand I haven't seen anything obviously bad.

Feel free to add an Acked-by: Christian König  
to the series.


Regards,
Christian.


---
  drivers/gpu/drm/Kconfig   |  1 +
  drivers/gpu/drm/selftests/Makefile|  3 +-
  .../gpu/drm/selftests/drm_buddy_selftests.h   |  9 
  drivers/gpu/drm/selftests/test-drm_buddy.c| 49 +++
  4 files changed, 61 insertions(+), 1 deletion(-)
  create mode 100644 drivers/gpu/drm/selftests/drm_buddy_selftests.h
  create mode 100644 drivers/gpu/drm/selftests/test-drm_buddy.c

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index eb5a57ae3c5c..ff856df3f97f 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -71,6 +71,7 @@ config DRM_DEBUG_SELFTEST
select DRM_DP_HELPER
select DRM_LIB_RANDOM
select DRM_KMS_HELPER
+   select DRM_BUDDY
select DRM_EXPORT_FOR_TESTS if m
default n
help
diff --git a/drivers/gpu/drm/selftests/Makefile 
b/drivers/gpu/drm/selftests/Makefile
index 0856e4b12f70..5ba5f9138c95 100644
--- a/drivers/gpu/drm/selftests/Makefile
+++ b/drivers/gpu/drm/selftests/Makefile
@@ -4,4 +4,5 @@ test-drm_modeset-y := test-drm_modeset_common.o 
test-drm_plane_helper.o \
  test-drm_damage_helper.o test-drm_dp_mst_helper.o \
  test-drm_rect.o
  
-obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o test-drm_cmdline_parser.o

+obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o 
test-drm_cmdline_parser.o \
+   test-drm_buddy.o
diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
new file mode 100644
index ..a4bcf3a6dfe3
--- /dev/null
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as igt__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * Tests are executed in order by igt/drm_buddy
+ */
+selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
new file mode 100644
index ..51e4d393d22c
--- /dev/null
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#define pr_fmt(fmt) "drm_buddy: " fmt
+
+#include <linux/module.h>
+
+#include <drm/drm_buddy.h>
+
+#include "../lib/drm_random.h"
+
+#define TESTS "drm_buddy_selftests.h"
+#include "drm_selftest.h"
+
+static unsigned int random_seed;
+
+static int igt_sanitycheck(void *ignored)
+{
+   pr_info("%s - ok!\n", __func__);
+   return 0;
+}
+
+#include "drm_selftest.c"
+
+static int __init test_drm_buddy_init(void)
+{
+   int err;
+
+   while (!random_seed)
+   random_seed = get_random_int();
+
+   pr_info("Testing DRM buddy manager (struct drm_buddy), with 
random_seed=0x%x\n",
+   random_seed);
+   err = run_selftests(selftests, ARRAY_SIZE(selftests), NULL);
+
+   return err > 0 ? 0 : err;
+}
+
+static void __exit test_drm_buddy_exit(void)
+{
+}
+
+module_init(test_drm_buddy_init);
+module_exit(test_drm_buddy_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");




RE: [Patch v5 15/24] drm/amdkfd: CRIU implement gpu_id remapping

2022-02-03 Thread Yat Sin, David
One nit pick.
Regards,
David


@@ -673,15 +693,19 @@ static int kfd_ioctl_dbg_address_watch(struct file *filep,
 
memset((void *) &aw_info, 0, sizeof(struct dbg_address_watch_info));
 
-   dev = kfd_device_by_id(args->gpu_id);
-   if (!dev)
+   mutex_lock(&p->mutex);
+   pdd = kfd_process_device_data_by_id(p, args->gpu_id);
+   mutex_unlock(&p->mutex);
+   if (!pdd) {
+   pr_debug("Could not find gpu id 0x%x\n", args->gpu_id);
return -EINVAL;
+   }
+   dev = pdd->dev;
 
if (dev->adev->asic_type == CHIP_CARRIZO) {
pr_debug("kfd_ioctl_dbg_wave_control not supported on CZ\n");
return -EINVAL;
}
-
Unnecessary extra line

cmd_from_user = (void __user *) args->content_ptr;
 
/* Validate arguments */



binary constants (was: Re: [PATCH v3] drm/dp: Add Additional DP2 Headers)

2022-02-03 Thread Jani Nikula
On Mon, 27 Sep 2021, Fangzhi Zuo  wrote:
> +/* DSC Extended Capability Branch Total DSC Resources */
> +#define DP_DSC_SUPPORT_AND_DSC_DECODER_COUNT 0x2260  /* 2.0 */
> +# define DP_DSC_DECODER_COUNT_MASK   (0b111 << 5)
> +# define DP_DSC_DECODER_COUNT_SHIFT  5
> +#define DP_DSC_MAX_SLICE_COUNT_AND_AGGREGATION_0 0x2270  /* 2.0 */
> +# define DP_DSC_DECODER_0_MAXIMUM_SLICE_COUNT_MASK   (1 << 0)
> +# define DP_DSC_DECODER_0_AGGREGATION_SUPPORT_MASK   (0b111 << 1)
> +# define DP_DSC_DECODER_0_AGGREGATION_SUPPORT_SHIFT  1

The patch was merged a while back, but only now I noticed the use of
binary constants, which in C are a GCC and Clang extension [1][2]. There
are some instances in the kernel, but not a whole lot.

Do we want to avoid or embrace them going forward? Or meh?


BR,
Jani.


[1] https://gcc.gnu.org/onlinedocs/gcc/Binary-constants.html
[2] https://clang.llvm.org/docs/LanguageExtensions.html#c-14-binary-literals

-- 
Jani Nikula, Intel Open Source Graphics Center
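
For reference, the two spellings compile to the same constant; binary literals are
a GCC/Clang extension in current C and only become standard with C23. A quick
check:

#include <stdio.h>

#define COUNT_MASK_BIN (0b111 << 5)    /* GCC/Clang extension (standard in C23) */
#define COUNT_MASK_HEX (0x7 << 5)      /* strictly portable spelling */

int main(void)
{
    printf("equal: %d, value: 0x%x\n",
           COUNT_MASK_BIN == COUNT_MASK_HEX, COUNT_MASK_BIN);
    /* equal: 1, value: 0xe0 */
    return 0;
}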


[PATCH 7/7] drm/selftests: add drm buddy pathological testcase

2022-02-03 Thread Arunpravin
create a pot-sized mm, then allocate one of each possible
order within. This should leave the mm with exactly one
page left. Free the largest block, then whittle down again.
Eventually we will have a fully 50% fragmented mm.

Signed-off-by: Arunpravin 
---
 .../gpu/drm/selftests/drm_buddy_selftests.h   |   1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 136 ++
 2 files changed, 137 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index 411d072cbfc5..455b756c4ae5 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -12,3 +12,4 @@ selftest(buddy_alloc_range, igt_buddy_alloc_range)
 selftest(buddy_alloc_optimistic, igt_buddy_alloc_optimistic)
 selftest(buddy_alloc_pessimistic, igt_buddy_alloc_pessimistic)
 selftest(buddy_alloc_smoke, igt_buddy_alloc_smoke)
+selftest(buddy_alloc_pathological, igt_buddy_alloc_pathological)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index 2074e8c050a4..b2d0313a4bc5 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -338,6 +338,142 @@ static void igt_mm_config(u64 *size, u64 *chunk_size)
*size = (u64)s << 12;
 }
 
+static int igt_buddy_alloc_pathological(void *arg)
+{
+   u64 mm_size, size, min_page_size, start = 0;
+   struct drm_buddy_block *block;
+   const int max_order = 3;
+   unsigned long flags = 0;
+   int order, top, err;
+   struct drm_buddy mm;
+   LIST_HEAD(blocks);
+   LIST_HEAD(holes);
+   LIST_HEAD(tmp);
+
+   /*
+* Create a pot-sized mm, then allocate one of each possible
+* order within. This should leave the mm with exactly one
+* page left. Free the largest block, then whittle down again.
+* Eventually we will have a fully 50% fragmented mm.
+*/
+
+    mm_size = PAGE_SIZE << max_order;
+    err = drm_buddy_init(&mm, mm_size, PAGE_SIZE);
+    if (err) {
+        pr_err("buddy_init failed(%d)\n", err);
+        return err;
+    }
+    BUG_ON(mm.max_order != max_order);
+
+    for (top = max_order; top; top--) {
+        /* Make room by freeing the largest allocated block */
+        block = list_first_entry_or_null(&blocks, typeof(*block), link);
+        if (block) {
+            list_del(&block->link);
+            drm_buddy_free_block(&mm, block);
+        }
+
+        for (order = top; order--; ) {
+            size = min_page_size = get_size(order, PAGE_SIZE);
+            err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                         min_page_size, &tmp, flags);
+            if (err) {
+                pr_info("buddy_alloc hit -ENOMEM with order=%d, top=%d\n",
+                        order, top);
+                goto err;
+            }
+
+            block = list_first_entry_or_null(&tmp,
+                                             struct drm_buddy_block,
+                                             link);
+            if (!block) {
+                pr_err("alloc_blocks has no blocks\n");
+                err = -EINVAL;
+                goto err;
+            }
+
+            list_del(&block->link);
+            list_add_tail(&block->link, &blocks);
+        }
+
+        /* There should be one final page for this sub-allocation */
+        size = min_page_size = get_size(0, PAGE_SIZE);
+        err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                     min_page_size, &tmp, flags);
+        if (err) {
+            pr_info("buddy_alloc hit -ENOMEM for hole\n");
+            goto err;
+        }
+
+        block = list_first_entry_or_null(&tmp,
+                                         struct drm_buddy_block,
+                                         link);
+        if (!block) {
+            pr_err("alloc_blocks has no blocks\n");
+            err = -EINVAL;
+            goto err;
+        }
+
+        list_del(&block->link);
+        list_add_tail(&block->link, &holes);
+
+        size = min_page_size = get_size(top, PAGE_SIZE);
+        err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                     min_page_size, &tmp, flags);
+        if (!err) {
+            pr_info("buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!",
+                    top, max_order);
+            block = list_first_entry_or_null(&tmp,
+                                             struct drm_buddy_block,
+
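
The invariant the (truncated) test above leans on is the power-of-two sum: one
block of each order 0..max_order-1 covers 2^max_order - 1 pages of a
2^max_order-page mm, leaving exactly one page free. A standalone check with the
test's max_order = 3:

#include <stdio.h>

int main(void)
{
    const unsigned long long page = 4096;
    const int max_order = 3;    /* as in the test above */
    unsigned long long mm_size = page << max_order;
    unsigned long long used = 0;
    int order;

    /* One block of each order below max_order: 4 + 2 + 1 pages. */
    for (order = max_order - 1; order >= 0; order--)
        used += page << order;

    printf("mm=%llu used=%llu left=%llu\n", mm_size, used, mm_size - used);
    /* left == 4096: exactly one page remains, as the test asserts. */
    return 0;
}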

[PATCH 6/7] drm/selftests: add drm buddy smoke testcase

2022-02-03 Thread Arunpravin
- add a test to ascertain that the critical functionalities
  of the program are working fine
- add a timeout helper function

Signed-off-by: Arunpravin 
---
 .../gpu/drm/selftests/drm_buddy_selftests.h   |   1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 143 ++
 2 files changed, 144 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index b14f04a1de19..411d072cbfc5 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -11,3 +11,4 @@ selftest(buddy_alloc_limit, igt_buddy_alloc_limit)
 selftest(buddy_alloc_range, igt_buddy_alloc_range)
 selftest(buddy_alloc_optimistic, igt_buddy_alloc_optimistic)
 selftest(buddy_alloc_pessimistic, igt_buddy_alloc_pessimistic)
+selftest(buddy_alloc_smoke, igt_buddy_alloc_smoke)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index e97f583ed0cd..2074e8c050a4 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -7,6 +7,7 @@
 
 #include <linux/module.h>
 #include <linux/prime_numbers.h>
+#include <linux/sched/signal.h>
 
 #include <drm/drm_buddy.h>
 
@@ -15,6 +16,9 @@
 #define TESTS "drm_buddy_selftests.h"
 #include "drm_selftest.h"
 
+#define IGT_TIMEOUT(name__) \
+   unsigned long name__ = jiffies + MAX_SCHEDULE_TIMEOUT
+
 static unsigned int random_seed;
 
 static inline u64 get_size(int order, u64 chunk_size)
@@ -22,6 +26,26 @@ static inline u64 get_size(int order, u64 chunk_size)
return (1 << order) * chunk_size;
 }
 
+__printf(2, 3)
+static bool __igt_timeout(unsigned long timeout, const char *fmt, ...)
+{
+   va_list va;
+
+   if (!signal_pending(current)) {
+   cond_resched();
+   if (time_before(jiffies, timeout))
+   return false;
+   }
+
+   if (fmt) {
+   va_start(va, fmt);
+   vprintk(fmt, va);
+   va_end(va);
+   }
+
+   return true;
+}
+
 static inline const char *yesno(bool v)
 {
return v ? "yes" : "no";
@@ -314,6 +338,125 @@ static void igt_mm_config(u64 *size, u64 *chunk_size)
*size = (u64)s << 12;
 }
 
+static int igt_buddy_alloc_smoke(void *arg)
+{
+   u64 mm_size, min_page_size, chunk_size, start = 0;
+   unsigned long flags = 0;
+   struct drm_buddy mm;
+   int *order;
+   int err, i;
+
+   DRM_RND_STATE(prng, random_seed);
+   IGT_TIMEOUT(end_time);
+
+    igt_mm_config(&mm_size, &chunk_size);
+
+    err = drm_buddy_init(&mm, mm_size, chunk_size);
+    if (err) {
+        pr_err("buddy_init failed(%d)\n", err);
+        return err;
+    }
+
+    order = drm_random_order(mm.max_order + 1, &prng);
+    if (!order)
+        goto out_fini;
+
+    for (i = 0; i <= mm.max_order; ++i) {
+        struct drm_buddy_block *block;
+        int max_order = order[i];
+        bool timeout = false;
+        LIST_HEAD(blocks);
+        u64 total, size;
+        LIST_HEAD(tmp);
+        int order;
+
+        err = igt_check_mm(&mm);
+        if (err) {
+            pr_err("pre-mm check failed, abort\n");
+            break;
+        }
+
+        order = max_order;
+        total = 0;
+
+        do {
+retry:
+            size = min_page_size = get_size(order, chunk_size);
+            err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                         min_page_size, &tmp, flags);
+            if (err) {
+                if (err == -ENOMEM) {
+                    pr_info("buddy_alloc hit -ENOMEM with order=%d\n",
+                            order);
+                } else {
+                    if (order--) {
+                        err = 0;
+                        goto retry;
+                    }
+
+                    pr_err("buddy_alloc with order=%d failed(%d)\n",
+                           order, err);
+                }
+
+                break;
+            }
+
+            block = list_first_entry_or_null(&tmp,
+                                             struct drm_buddy_block,
+                                             link);
+            if (!block) {
+                pr_err("alloc_blocks has no blocks\n");
+                err = -EINVAL;
+                break;
+            }
+
+            list_del(&block->link);
+            list_add_tail(&block->link, &blocks);
+
+            if (drm_buddy_block_order(block) != order) {
+   

[PATCH 5/7] drm/selftests: add drm buddy pessimistic testcase

2022-02-03 Thread Arunpravin
create a pot-sized mm, then allocate one of each possible
order within. This should leave the mm with exactly one
page left.

Signed-off-by: Arunpravin 
---
 .../gpu/drm/selftests/drm_buddy_selftests.h   |   1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 153 ++
 2 files changed, 154 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index 21a6bd38864f..b14f04a1de19 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -10,3 +10,4 @@ selftest(sanitycheck, igt_sanitycheck) /* keep first 
(selfcheck for igt) */
 selftest(buddy_alloc_limit, igt_buddy_alloc_limit)
 selftest(buddy_alloc_range, igt_buddy_alloc_range)
 selftest(buddy_alloc_optimistic, igt_buddy_alloc_optimistic)
+selftest(buddy_alloc_pessimistic, igt_buddy_alloc_pessimistic)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index b193d9556fb4..e97f583ed0cd 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -314,6 +314,159 @@ static void igt_mm_config(u64 *size, u64 *chunk_size)
*size = (u64)s << 12;
 }
 
+static int igt_buddy_alloc_pessimistic(void *arg)
+{
+   u64 mm_size, size, min_page_size, start = 0;
+   struct drm_buddy_block *block, *bn;
+   const unsigned int max_order = 16;
+   unsigned long flags = 0;
+   struct drm_buddy mm;
+   unsigned int order;
+   LIST_HEAD(blocks);
+   LIST_HEAD(tmp);
+   int err;
+
+   /*
+* Create a pot-sized mm, then allocate one of each possible
+* order within. This should leave the mm with exactly one
+* page left.
+*/
+
+    mm_size = PAGE_SIZE << max_order;
+    err = drm_buddy_init(&mm, mm_size, PAGE_SIZE);
+    if (err) {
+        pr_err("buddy_init failed(%d)\n", err);
+        return err;
+    }
+    BUG_ON(mm.max_order != max_order);
+
+    for (order = 0; order < max_order; order++) {
+        size = min_page_size = get_size(order, PAGE_SIZE);
+        err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                     min_page_size, &tmp, flags);
+        if (err) {
+            pr_info("buddy_alloc hit -ENOMEM with order=%d\n",
+                    order);
+            goto err;
+        }
+
+        block = list_first_entry_or_null(&tmp,
+                                         struct drm_buddy_block,
+                                         link);
+        if (!block) {
+            pr_err("alloc_blocks has no blocks\n");
+            err = -EINVAL;
+            goto err;
+        }
+
+        list_del(&block->link);
+        list_add_tail(&block->link, &blocks);
+    }
+
+    /* And now the last remaining block available */
+    size = min_page_size = get_size(0, PAGE_SIZE);
+    err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                 min_page_size, &tmp, flags);
+    if (err) {
+        pr_info("buddy_alloc hit -ENOMEM on final alloc\n");
+        goto err;
+    }
+
+    block = list_first_entry_or_null(&tmp,
+                                     struct drm_buddy_block,
+                                     link);
+    if (!block) {
+        pr_err("alloc_blocks has no blocks\n");
+        err = -EINVAL;
+        goto err;
+    }
+
+    list_del(&block->link);
+    list_add_tail(&block->link, &blocks);
+
+    /* Should be completely full! */
+    for (order = max_order; order--; ) {
+        size = min_page_size = get_size(order, PAGE_SIZE);
+        err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+                                     min_page_size, &tmp, flags);
+        if (!err) {
+            pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!",
+                    order);
+            block = list_first_entry_or_null(&tmp,
+                                             struct drm_buddy_block,
+                                             link);
+            if (!block) {
+                pr_err("alloc_blocks has no blocks\n");
+                err = -EINVAL;
+                goto err;
+            }
+
+            list_del(&block->link);
+            list_add_tail(&block->link, &blocks);
+            err = -EINVAL;
+            goto err;
+        }
+    }
+
+    block = list_last_entry(&blocks, typeof(*block), link);
+    list_del(&block->link);
+    drm_buddy_free_block(&mm, block);
+
+    /* As we free in increasing size, we make available larger blocks */
+    order = 1;
+    list_for_each_entry_safe(block, bn, &blocks, link) {
+        list_del(&block->link);
+        drm_buddy_free_block(&mm, block);
+
+

[PATCH 3/7] drm/selftests: add drm buddy alloc range testcase

2022-02-03 Thread Arunpravin
- add a test to check the range allocation
- export the get_buddy() helper from drm_buddy.c (as drm_get_buddy())
- export drm_prandom_u32_max_state() in lib/drm_random.c
- include helper functions
- include prime number header file

Signed-off-by: Arunpravin 
---
 drivers/gpu/drm/drm_buddy.c   |  20 +-
 drivers/gpu/drm/lib/drm_random.c  |   3 +-
 drivers/gpu/drm/lib/drm_random.h  |   2 +
 .../gpu/drm/selftests/drm_buddy_selftests.h   |   1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 390 ++
 include/drm/drm_buddy.h   |   3 +
 6 files changed, 414 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index 4845ef784b5e..501229d843c4 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -211,7 +211,7 @@ static int split_block(struct drm_buddy *mm,
 }
 
 static struct drm_buddy_block *
-get_buddy(struct drm_buddy_block *block)
+__get_buddy(struct drm_buddy_block *block)
 {
struct drm_buddy_block *parent;
 
@@ -225,6 +225,18 @@ get_buddy(struct drm_buddy_block *block)
return parent->left;
 }
 
+/**
+ * drm_get_buddy - get buddy address
+ *
+ * @block: DRM buddy block
+ */
+struct drm_buddy_block *
+drm_get_buddy(struct drm_buddy_block *block)
+{
+   return __get_buddy(block);
+}
+EXPORT_SYMBOL(drm_get_buddy);
+
 static void __drm_buddy_free(struct drm_buddy *mm,
 struct drm_buddy_block *block)
 {
@@ -233,7 +245,7 @@ static void __drm_buddy_free(struct drm_buddy *mm,
while ((parent = block->parent)) {
struct drm_buddy_block *buddy;
 
-   buddy = get_buddy(block);
+   buddy = __get_buddy(block);
 
if (!drm_buddy_block_is_free(buddy))
break;
@@ -361,7 +373,7 @@ alloc_range_bias(struct drm_buddy *mm,
 * bigger is better, so make sure we merge everything back before we
 * free the allocated blocks.
 */
-   buddy = get_buddy(block);
+   buddy = __get_buddy(block);
if (buddy &&
(drm_buddy_block_is_free(block) &&
 drm_buddy_block_is_free(buddy)))
@@ -500,7 +512,7 @@ static int __alloc_range(struct drm_buddy *mm,
 * bigger is better, so make sure we merge everything back before we
 * free the allocated blocks.
 */
-   buddy = get_buddy(block);
+   buddy = __get_buddy(block);
if (buddy &&
(drm_buddy_block_is_free(block) &&
 drm_buddy_block_is_free(buddy)))
diff --git a/drivers/gpu/drm/lib/drm_random.c b/drivers/gpu/drm/lib/drm_random.c
index eeb155826d27..31b5a3e21911 100644
--- a/drivers/gpu/drm/lib/drm_random.c
+++ b/drivers/gpu/drm/lib/drm_random.c
@@ -7,10 +7,11 @@
 
 #include "drm_random.h"
 
-static inline u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
 {
return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
 }
+EXPORT_SYMBOL(drm_prandom_u32_max_state);
 
 void drm_random_reorder(unsigned int *order, unsigned int count,
struct rnd_state *state)
diff --git a/drivers/gpu/drm/lib/drm_random.h b/drivers/gpu/drm/lib/drm_random.h
index 4a3e94dfa0c0..5543bf0474bc 100644
--- a/drivers/gpu/drm/lib/drm_random.h
+++ b/drivers/gpu/drm/lib/drm_random.h
@@ -22,5 +22,7 @@ unsigned int *drm_random_order(unsigned int count,
 void drm_random_reorder(unsigned int *order,
unsigned int count,
struct rnd_state *state);
+u32 drm_prandom_u32_max_state(u32 ep_ro,
+ struct rnd_state *state);
 
 #endif /* !__DRM_RANDOM_H__ */
diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index ebe16162762f..3230bfd2770b 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -8,3 +8,4 @@
  */
 selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
 selftest(buddy_alloc_limit, igt_buddy_alloc_limit)
+selftest(buddy_alloc_range, igt_buddy_alloc_range)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index fd7d1a112458..e347060c05a2 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -6,6 +6,7 @@
 #define pr_fmt(fmt) "drm_buddy: " fmt
 
 #include <linux/module.h>
+#include <linux/prime_numbers.h>
 
 #include <drm/drm_buddy.h>
 
@@ -16,6 +17,395 @@
 
 static unsigned int random_seed;
 
+static inline const char *yesno(bool v)
+{
+   return v ? "yes" : "no";
+}
+
+static void __igt_dump_block(struct drm_buddy *mm,
+struct drm_buddy_block *block,
+bool buddy)
+{
+   pr_err("block info: header=%llx, state=%u, order=%d, offset=%llx 
size=%llx root=%s buddy=%s\n",
+  block->header,
+  

[PATCH 4/7] drm/selftests: add drm buddy optimistic testcase

2022-02-03 Thread Arunpravin
create a mm with one block of each order available, and
try to allocate them all.
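
As a quick sanity check of the size arithmetic, here is a minimal
userspace sketch (PAGE_SZ standing in for the kernel's PAGE_SIZE)
showing that a mm of PAGE_SIZE * (2^(max_order + 1) - 1) bytes is
exactly covered by one block of each order:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ 4096ULL

int main(void)
{
	const int max_order = 16;
	/* one block of each order 0..max_order sums to 2^(max_order+1) - 1 pages */
	uint64_t mm_size = PAGE_SZ * ((1ULL << (max_order + 1)) - 1);
	uint64_t remaining = mm_size;

	for (int order = 0; order <= max_order; order++)
		remaining -= (1ULL << order) * PAGE_SZ;

	assert(remaining == 0);	/* nothing left over: the mm is fully consumed */
	printf("mm_size=%llu bytes, one block per order covers it exactly\n",
	       (unsigned long long)mm_size);
	return 0;
}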

Signed-off-by: Arunpravin 
---
 .../gpu/drm/selftests/drm_buddy_selftests.h   |  1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 82 +++
 2 files changed, 83 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index 3230bfd2770b..21a6bd38864f 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -9,3 +9,4 @@
 selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
 selftest(buddy_alloc_limit, igt_buddy_alloc_limit)
 selftest(buddy_alloc_range, igt_buddy_alloc_range)
+selftest(buddy_alloc_optimistic, igt_buddy_alloc_optimistic)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index e347060c05a2..b193d9556fb4 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -17,6 +17,11 @@
 
 static unsigned int random_seed;
 
+static inline u64 get_size(int order, u64 chunk_size)
+{
+   return (1 << order) * chunk_size;
+}
+
 static inline const char *yesno(bool v)
 {
return v ? "yes" : "no";
@@ -309,6 +314,83 @@ static void igt_mm_config(u64 *size, u64 *chunk_size)
*size = (u64)s << 12;
 }
 
+static int igt_buddy_alloc_optimistic(void *arg)
+{
+   u64 mm_size, size, min_page_size, start = 0;
+   struct drm_buddy_block *block;
+   unsigned long flags = 0;
+   const int max_order = 16;
+   struct drm_buddy mm;
+   LIST_HEAD(blocks);
+   LIST_HEAD(tmp);
+   int order, err;
+
+   /*
+* Create a mm with one block of each order available, and
+* try to allocate them all.
+*/
+
+   mm_size = PAGE_SIZE * ((1 << (max_order + 1)) - 1);
+   err = drm_buddy_init(&mm,
+mm_size,
+PAGE_SIZE);
+   if (err) {
+   pr_err("buddy_init failed(%d)\n", err);
+   return err;
+   }
+
+   BUG_ON(mm.max_order != max_order);
+
+   for (order = 0; order <= max_order; order++) {
+   size = min_page_size = get_size(order, PAGE_SIZE);
+   err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
+min_page_size, &tmp, flags);
+   if (err) {
+   pr_info("buddy_alloc hit -ENOMEM with order=%d\n",
+   order);
+   goto err;
+   }
+
+   block = list_first_entry_or_null(&tmp,
+struct drm_buddy_block,
+link);
+   if (!block) {
+   pr_err("alloc_blocks has no blocks\n");
+   err = -EINVAL;
+   goto err;
+   }
+
+   list_del(&block->link);
+   list_add_tail(&block->link, &blocks);
+   }
+
+   /* Should be completely full! */
+   size = min_page_size = get_size(0, PAGE_SIZE);
+   err = drm_buddy_alloc_blocks(&mm, start, mm_size, size, min_page_size,
+&tmp, flags);
+   if (!err) {
+   pr_info("buddy_alloc unexpectedly succeeded, it should be 
full!");
+   block = list_first_entry_or_null(&tmp,
+struct drm_buddy_block,
+link);
+   if (!block) {
+   pr_err("alloc_blocks has no blocks\n");
+   err = -EINVAL;
+   goto err;
+   }
+
+   list_del(&block->link);
+   list_add_tail(&block->link, &blocks);
+   goto err;
+   } else {
+   pr_info("%s - succeeded\n", __func__);
+   err = 0;
+   }
+
+err:
+   drm_buddy_free_list(&mm, &blocks);
+   drm_buddy_fini(&mm);
+   return err;
+}
+
 static int igt_buddy_alloc_range(void *arg)
 {
unsigned long flags = DRM_BUDDY_RANGE_ALLOCATION;
-- 
2.25.1



[PATCH 2/7] drm/selftests: add drm buddy alloc limit testcase

2022-02-03 Thread Arunpravin
add a test to check the maximum allocation limit

Signed-off-by: Arunpravin 
---
 .../gpu/drm/selftests/drm_buddy_selftests.h   |  1 +
 drivers/gpu/drm/selftests/test-drm_buddy.c| 60 +++
 2 files changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
index a4bcf3a6dfe3..ebe16162762f 100644
--- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -7,3 +7,4 @@
  * Tests are executed in order by igt/drm_buddy
  */
 selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
+selftest(buddy_alloc_limit, igt_buddy_alloc_limit)
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
index 51e4d393d22c..fd7d1a112458 100644
--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -16,6 +16,66 @@
 
 static unsigned int random_seed;
 
+static int igt_buddy_alloc_limit(void *arg)
+{
+   u64 end, size = U64_MAX, start = 0;
+   struct drm_buddy_block *block;
+   unsigned long flags = 0;
+   LIST_HEAD(allocated);
+   struct drm_buddy mm;
+   int err;
+
+   size = end = round_down(size, 4096);
+   err = drm_buddy_init(&mm, size, PAGE_SIZE);
+   if (err)
+   return err;
+
+   if (mm.max_order != DRM_BUDDY_MAX_ORDER) {
+   pr_err("mm.max_order(%d) != %d\n",
+  mm.max_order, DRM_BUDDY_MAX_ORDER);
+   err = -EINVAL;
+   goto out_fini;
+   }
+
+   err = drm_buddy_alloc_blocks(&mm, start, end, size,
+PAGE_SIZE, &allocated, flags);
+
+   if (unlikely(err))
+   goto out_free;
+
+   block = list_first_entry_or_null(&allocated,
+struct drm_buddy_block,
+link);
+
+   if (!block)
+   goto out_fini;
+
+   if (drm_buddy_block_order(block) != mm.max_order) {
+   pr_err("block order(%d) != %d\n",
+  drm_buddy_block_order(block), mm.max_order);
+   err = -EINVAL;
+   goto out_free;
+   }
+
+   if (drm_buddy_block_size(&mm, block) !=
+   BIT_ULL(mm.max_order) * PAGE_SIZE) {
+   pr_err("block size(%llu) != %llu\n",
+  drm_buddy_block_size(&mm, block),
+  BIT_ULL(mm.max_order) * PAGE_SIZE);
+   err = -EINVAL;
+   goto out_free;
+   }
+
+   if (!err)
+   pr_info("%s - succeeded\n", __func__);
+
+out_free:
+   drm_buddy_free_list(&mm, &allocated);
+out_fini:
+   drm_buddy_fini(&mm);
+   return err;
+}
+
 static int igt_sanitycheck(void *ignored)
 {
pr_info("%s - ok!\n", __func__);
-- 
2.25.1



[PATCH 1/7] drm/selftests: Move i915 buddy selftests into drm

2022-02-03 Thread Arunpravin
- move i915 buddy selftests into drm selftests folder
- add Makefile and Kconfig support
- add sanitycheck testcase

Prerequisites
- These series of selftests patches are created on top of
  drm buddy series
- Enable kselftests for DRM as a module in .config

Signed-off-by: Arunpravin 
---
 drivers/gpu/drm/Kconfig   |  1 +
 drivers/gpu/drm/selftests/Makefile|  3 +-
 .../gpu/drm/selftests/drm_buddy_selftests.h   |  9 
 drivers/gpu/drm/selftests/test-drm_buddy.c| 49 +++
 4 files changed, 61 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/selftests/drm_buddy_selftests.h
 create mode 100644 drivers/gpu/drm/selftests/test-drm_buddy.c

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index eb5a57ae3c5c..ff856df3f97f 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -71,6 +71,7 @@ config DRM_DEBUG_SELFTEST
select DRM_DP_HELPER
select DRM_LIB_RANDOM
select DRM_KMS_HELPER
+   select DRM_BUDDY
select DRM_EXPORT_FOR_TESTS if m
default n
help
diff --git a/drivers/gpu/drm/selftests/Makefile 
b/drivers/gpu/drm/selftests/Makefile
index 0856e4b12f70..5ba5f9138c95 100644
--- a/drivers/gpu/drm/selftests/Makefile
+++ b/drivers/gpu/drm/selftests/Makefile
@@ -4,4 +4,5 @@ test-drm_modeset-y := test-drm_modeset_common.o 
test-drm_plane_helper.o \
  test-drm_damage_helper.o test-drm_dp_mst_helper.o \
  test-drm_rect.o
 
-obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o test-drm_cmdline_parser.o
+obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o test-drm_cmdline_parser.o \
+   test-drm_buddy.o
diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h 
b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
new file mode 100644
index ..a4bcf3a6dfe3
--- /dev/null
+++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* List each unit test as selftest(name, function)
+ *
+ * The name is used as both an enum and expanded as igt__name to create
+ * a module parameter. It must be unique and legal for a C identifier.
+ *
+ * Tests are executed in order by igt/drm_buddy
+ */
+selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c 
b/drivers/gpu/drm/selftests/test-drm_buddy.c
new file mode 100644
index ..51e4d393d22c
--- /dev/null
+++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#define pr_fmt(fmt) "drm_buddy: " fmt
+
+#include <linux/module.h>
+
+#include <drm/drm_buddy.h>
+
+#include "../lib/drm_random.h"
+
+#define TESTS "drm_buddy_selftests.h"
+#include "drm_selftest.h"
+
+static unsigned int random_seed;
+
+static int igt_sanitycheck(void *ignored)
+{
+   pr_info("%s - ok!\n", __func__);
+   return 0;
+}
+
+#include "drm_selftest.c"
+
+static int __init test_drm_buddy_init(void)
+{
+   int err;
+
+   while (!random_seed)
+   random_seed = get_random_int();
+
+   pr_info("Testing DRM buddy manager (struct drm_buddy), with 
random_seed=0x%x\n",
+   random_seed);
+   err = run_selftests(selftests, ARRAY_SIZE(selftests), NULL);
+
+   return err > 0 ? 0 : err;
+}
+
+static void __exit test_drm_buddy_exit(void)
+{
+}
+
+module_init(test_drm_buddy_init);
+module_exit(test_drm_buddy_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
-- 
2.25.1



Re: binary constants (was: Re: [PATCH v3] drm/dp: Add Additional DP2 Headers)

2022-02-03 Thread Daniel Vetter
On Thu, Feb 3, 2022 at 12:58 PM Jani Nikula  wrote:
>
> On Mon, 27 Sep 2021, Fangzhi Zuo  wrote:
> > +/* DSC Extended Capability Branch Total DSC Resources */
> > +#define DP_DSC_SUPPORT_AND_DSC_DECODER_COUNT 0x2260  /* 2.0 */
> > +# define DP_DSC_DECODER_COUNT_MASK   (0b111 << 5)
> > +# define DP_DSC_DECODER_COUNT_SHIFT  5
> > +#define DP_DSC_MAX_SLICE_COUNT_AND_AGGREGATION_0 0x2270  /* 2.0 */
> > +# define DP_DSC_DECODER_0_MAXIMUM_SLICE_COUNT_MASK   (1 << 0)
> > +# define DP_DSC_DECODER_0_AGGREGATION_SUPPORT_MASK   (0b111 << 1)
> > +# define DP_DSC_DECODER_0_AGGREGATION_SUPPORT_SHIFT  1
>
> The patch was merged a while back, but only now I noticed the use of
> binary constants, which in C is a GCC and Clang extension [1][2]. There
> are some instances in the kernel, but not a whole lot.
>
> Do we want to avoid or embrace them going forward? Or meh?

$ git grep '\<0b[01]*\>'

Gives me almost exclusive hits in
- .rst files
- .S assembler files
- comments and strings

So I think probably not? I mean there's also BIT() and BIT_MASK()
macros and stuff like that, and reading small masks is pretty simple.
-Daniel
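
For illustration, a small userspace program showing the equivalent
spellings under discussion; MASK() here is a stand-in for the kernel's
GENMASK(), not the real macro:

#include <assert.h>
#include <stdio.h>

/* userspace stand-in for GENMASK(h, l) */
#define MASK(h, l) (((1U << ((h) - (l) + 1)) - 1) << (l))

int main(void)
{
	/* three spellings of the same 3-bit field mask at bit 5 */
	unsigned int a = 0b111 << 5;	/* GCC/Clang binary-literal extension */
	unsigned int b = 7 << 5;	/* plain constant */
	unsigned int c = MASK(7, 5);	/* GENMASK-style */

	assert(a == b && b == c);
	printf("mask = 0x%x\n", a);	/* prints 0xe0 */
	return 0;
}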


>
>
> BR,
> Jani.
>
>
> [1] https://gcc.gnu.org/onlinedocs/gcc/Binary-constants.html
> [2] https://clang.llvm.org/docs/LanguageExtensions.html#c-14-binary-literals
>
> --
> Jani Nikula, Intel Open Source Graphics Center



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Patch v5 23/24] drm/amdkfd: CRIU resume shared virtual memory ranges

2022-02-03 Thread Rajneesh Bhardwaj
In CRIU resume stage, resume all the shared virtual memory ranges from
the data stored inside the resuming kfd process during CRIU restore
phase. Also set up the xnack mode and free up the resources.

KFD_IOCTL_SVM_ATTR_CLR_FLAGS is not available for querying via the get_attr
interface, but we must clear the flags during restore as there might be
some default flags set when the prange is created. Also handle the
invalid PREFETCH attribute values saved during checkpoint by replacing
them with another dummy KFD_IOCTL_SVM_ATTR_SET_FLAGS attribute.

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c |  10 +++
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 102 +++
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h |   6 ++
 3 files changed, 118 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index c143f242a84d..64e3b4e3a712 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2766,7 +2766,17 @@ static int criu_resume(struct file *filep,
}
 
	mutex_lock(&target->mutex);
+   ret = kfd_criu_resume_svm(target);
+   if (ret) {
+   pr_err("kfd_criu_resume_svm failed for %i\n", args->pid);
+   goto exit;
+   }
+
ret =  amdgpu_amdkfd_criu_resume(target->kgd_process_info);
+   if (ret)
+   pr_err("amdgpu_amdkfd_criu_resume failed for %i\n", args->pid);
+
+exit:
	mutex_unlock(&target->mutex);
 
kfd_unref_process(target);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 41ac049b3316..30ae21953da5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -3487,6 +3487,108 @@ svm_range_get_attr(struct kfd_process *p, struct 
mm_struct *mm,
return 0;
 }
 
+int kfd_criu_resume_svm(struct kfd_process *p)
+{
+   struct kfd_ioctl_svm_attribute *set_attr = NULL;
+   int nattr_common = 4, nattr_accessibility = 1;
+   struct criu_svm_metadata *criu_svm_md = NULL;
+   struct svm_range_list *svms = &p->svms;
+   struct criu_svm_metadata *next = NULL;
+   uint32_t set_flags = 0xffffffff;
+   int i, j, num_attrs, ret = 0;
+   uint64_t set_attr_size;
+   struct mm_struct *mm;
+
+   if (list_empty(&svms->criu_svm_metadata_list)) {
+   pr_debug("No SVM data from CRIU restore stage 2\n");
+   return ret;
+   }
+
+   mm = get_task_mm(p->lead_thread);
+   if (!mm) {
+   pr_err("failed to get mm for the target process\n");
+   return -ESRCH;
+   }
+
+   num_attrs = nattr_common + (nattr_accessibility * p->n_pdds);
+
+   i = j = 0;
+   list_for_each_entry(criu_svm_md, &svms->criu_svm_metadata_list, list) {
+   pr_debug("criu_svm_md[%d]\n\tstart: 0x%llx size: 0x%llx (npages)\n",
+i, criu_svm_md->data.start_addr, criu_svm_md->data.size);
+
+   for (j = 0; j < num_attrs; j++) {
+   pr_debug("\ncriu_svm_md[%d]->attrs[%d].type : 0x%x 
\ncriu_svm_md[%d]->attrs[%d].value : 0x%x\n",
+i,j, criu_svm_md->data.attrs[j].type,
+i,j, criu_svm_md->data.attrs[j].value);
+   switch (criu_svm_md->data.attrs[j].type) {
+   /* During Checkpoint operation, the query for
+* KFD_IOCTL_SVM_ATTR_PREFETCH_LOC attribute might
+* return KFD_IOCTL_SVM_LOCATION_UNDEFINED if they were
+* not used by the range which was checkpointed. Care
+* must be taken to not restore with an invalid value
+* otherwise the gpuidx value will be invalid and
+* set_attr would eventually fail so just replace those
+* with another dummy attribute such as
+* KFD_IOCTL_SVM_ATTR_SET_FLAGS.
+*/
+   case KFD_IOCTL_SVM_ATTR_PREFETCH_LOC:
+   if (criu_svm_md->data.attrs[j].value ==
+   KFD_IOCTL_SVM_LOCATION_UNDEFINED) {
+   criu_svm_md->data.attrs[j].type =
+   KFD_IOCTL_SVM_ATTR_SET_FLAGS;
+   criu_svm_md->data.attrs[j].value = 0;
+   }
+   break;
+   case KFD_IOCTL_SVM_ATTR_SET_FLAGS:
+   set_flags = criu_svm_md->data.attrs[j].value;
+   break;
+   default:
+   break;
+   }
+   }
+
+   /* CLR_FLAGS is not available via get_attr during checkpoint but
+* it needs to be inserted before 

[Patch v5 20/24] drm/amdkfd: CRIU Discover svm ranges

2022-02-03 Thread Rajneesh Bhardwaj
A KFD process may contain a number of virtual address ranges for shared
virtual memory management and each such range can have many SVM
attributes spanning across various nodes within the process boundary.
This change reports the total number of such SVM ranges and
their total private data size by extending the PROCESS_INFO op of the
CRIU IOCTL to discover the svm ranges in the target process. Future
patches bring in the required support for checkpoint and restore of
SVM ranges.
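
To make the accounting concrete, here is a rough userspace sketch of the
per-range private data size; the struct layouts are illustrative
stand-ins, not the real kfd_ioctl.h definitions:

#include <stdint.h>
#include <stdio.h>

struct svm_attr { uint32_t type; uint32_t value; };
struct svm_range_priv_hdr { uint32_t object_type; uint64_t start; uint64_t size; };

int main(void)
{
	const int nattr_common = 4, nattr_accessibility = 1;
	const int num_gpus = 2, num_ranges = 3;	/* example process */

	/* per range: fixed header plus the common attributes plus one
	 * accessibility attribute per GPU
	 */
	uint64_t per_range = sizeof(struct svm_range_priv_hdr) +
		(nattr_common + nattr_accessibility * num_gpus) *
		sizeof(struct svm_attr);

	printf("priv data for %d ranges: %llu bytes\n", num_ranges,
	       (unsigned long long)(num_ranges * per_range));
	return 0;
}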

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 12 +++--
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h|  5 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 59 
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 11 +
 4 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 3ec44f71307d..a755ea68a428 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2099,10 +2099,9 @@ static int criu_get_process_object_info(struct 
kfd_process *p,
uint32_t *num_objects,
uint64_t *objs_priv_size)
 {
-   int ret;
-   uint64_t priv_size;
+   uint64_t queues_priv_data_size, svm_priv_data_size, priv_size;
uint32_t num_queues, num_events, num_svm_ranges;
-   uint64_t queues_priv_data_size;
+   int ret;
 
*num_devices = p->n_pdds;
*num_bos = get_process_num_bos(p);
@@ -2112,7 +2111,10 @@ static int criu_get_process_object_info(struct 
kfd_process *p,
return ret;
 
num_events = kfd_get_num_events(p);
-   num_svm_ranges = 0; /* TODO: Implement SVM-Ranges */
+
+   ret = svm_range_get_info(p, &num_svm_ranges, &svm_priv_data_size);
+   if (ret)
+   return ret;
 
*num_objects = num_queues + num_events + num_svm_ranges;
 
@@ -2122,7 +2124,7 @@ static int criu_get_process_object_info(struct 
kfd_process *p,
priv_size += *num_bos * sizeof(struct kfd_criu_bo_priv_data);
priv_size += queues_priv_data_size;
	priv_size += num_events * sizeof(struct kfd_criu_event_priv_data);
-   /* TODO: Add SVM ranges priv size */
+   priv_size += svm_priv_data_size;
*objs_priv_size = priv_size;
}
return 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 903ad4a263f0..715dd0d4fac5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -1082,7 +1082,10 @@ enum kfd_criu_object_type {
 
 struct kfd_criu_svm_range_priv_data {
uint32_t object_type;
-   uint32_t reserved;
+   uint64_t start_addr;
+   uint64_t size;
+   /* Variable length array of attributes */
+   struct kfd_ioctl_svm_attribute attrs[0];
 };
 
 struct kfd_criu_queue_priv_data {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index d34508f5e88b..64cd7712c098 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -3481,6 +3481,65 @@ svm_range_get_attr(struct kfd_process *p, struct 
mm_struct *mm,
return 0;
 }
 
+int svm_range_get_info(struct kfd_process *p, uint32_t *num_svm_ranges,
+  uint64_t *svm_priv_data_size)
+{
+   uint64_t total_size, accessibility_size, common_attr_size;
+   int nattr_common = 4, nattr_accessibility = 1;
+   int num_devices = p->n_pdds;
+   struct svm_range_list *svms;
+   struct svm_range *prange;
+   uint32_t count = 0;
+
+   *svm_priv_data_size = 0;
+
+   svms = &p->svms;
+   if (!svms)
+   return -EINVAL;
+
+   mutex_lock(&svms->lock);
+   list_for_each_entry(prange, &svms->list, list) {
+   pr_debug("prange: 0x%p start: 0x%lx\t npages: 0x%llx\t end: 0x%llx\n",
+prange, prange->start, prange->npages,
+prange->start + prange->npages - 1);
+   count++;
+   }
+   mutex_unlock(&svms->lock);
+
+   *num_svm_ranges = count;
+   /* Only the accessibility attributes need to be queried for all the gpus
+* individually, remaining ones are spanned across the entire process
+* regardless of the various gpu nodes. Of the remaining attributes,
+* KFD_IOCTL_SVM_ATTR_CLR_FLAGS need not be saved.
+*
+* KFD_IOCTL_SVM_ATTR_PREFERRED_LOC
+* KFD_IOCTL_SVM_ATTR_PREFETCH_LOC
+* KFD_IOCTL_SVM_ATTR_SET_FLAGS
+* KFD_IOCTL_SVM_ATTR_GRANULARITY
+*
+* ** ACCESSIBILITY ATTRIBUTES **
+* (Considered as one, type is altered during query, value is gpuid)
+* KFD_IOCTL_SVM_ATTR_ACCESS
+* KFD_IOCTL_SVM_ATTR_ACCESS_IN_PLACE
+* KFD_IOCTL_SVM_ATTR_NO_ACCESS
+*/
+   if (*num_svm_ranges 

[Patch v5 13/24] drm/amdkfd: CRIU checkpoint and restore queue control stack

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

Checkpoint contents of queue control stacks on CRIU dump and restore them
during CRIU restore.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |  2 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 22 ---
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  9 ++-
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h  | 11 +++-
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c  | 13 ++--
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 14 +++--
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c   | 29 +++--
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c   | 22 +--
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  5 +-
 .../amd/amdkfd/kfd_process_queue_manager.c| 62 +--
 11 files changed, 138 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 999672602252..608214ea634d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -311,7 +311,7 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
p->pasid,
dev->id);
 
-   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id, NULL, NULL,
+   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id, NULL, NULL, NULL,
&doorbell_offset_in_process);
if (err != 0)
goto err_create_queue;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
index 3a5303ebcabf..8eca9ed3ab36 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
@@ -185,7 +185,7 @@ static int dbgdev_register_diq(struct kfd_dbgdev *dbgdev)
properties.type = KFD_QUEUE_TYPE_DIQ;
 
status = pqm_create_queue(dbgdev->pqm, dbgdev->dev, NULL,
-   &properties, &qid, NULL, NULL, NULL);
+   &properties, &qid, NULL, NULL, NULL, NULL);
 
if (status) {
pr_err("Failed to create DIQ\n");
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 42933610d4e1..63b3c7af681b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -323,7 +323,7 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
struct queue *q,
struct qcm_process_device *qpd,
const struct kfd_criu_queue_priv_data *qd,
-   const void *restore_mqd)
+   const void *restore_mqd, const void *restore_ctl_stack)
 {
struct mqd_manager *mqd_mgr;
int retval;
@@ -385,7 +385,8 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
 
if (qd)
	mqd_mgr->restore_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj, &q->gart_mqd_addr,
-&q->properties, restore_mqd);
+&q->properties, restore_mqd, restore_ctl_stack,
+qd->ctl_stack_size);
else
mqd_mgr->init_mqd(mqd_mgr, >mqd, q->mqd_mem_obj,
>gart_mqd_addr, >properties);
@@ -1342,7 +1343,7 @@ static void destroy_kernel_queue_cpsch(struct 
device_queue_manager *dqm,
 static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue 
*q,
struct qcm_process_device *qpd,
const struct kfd_criu_queue_priv_data *qd,
-   const void *restore_mqd)
+   const void *restore_mqd, const void *restore_ctl_stack)
 {
int retval;
struct mqd_manager *mqd_mgr;
@@ -1391,7 +1392,8 @@ static int create_queue_cpsch(struct device_queue_manager 
*dqm, struct queue *q,
 
if (qd)
	mqd_mgr->restore_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj, &q->gart_mqd_addr,
-&q->properties, restore_mqd);
+&q->properties, restore_mqd, restore_ctl_stack,
+qd->ctl_stack_size);
else
mqd_mgr->init_mqd(mqd_mgr, >mqd, q->mqd_mem_obj,
>gart_mqd_addr, >properties);
@@ -1799,7 +1801,8 @@ static int get_wave_state(struct device_queue_manager 
*dqm,
 
 static void get_queue_checkpoint_info(struct device_queue_manager *dqm,
const struct queue *q,
-   u32 *mqd_size)
+   u32 *mqd_size,
+   u32 *ctl_stack_size)
 {
struct mqd_manager *mqd_mgr;
enum KFD_MQD_TYPE mqd_type =
@@ -1808,13 +1811,18 @@ static void 

[Patch v5 18/24] drm/amdkfd: CRIU allow external mm for svm ranges

2022-02-03 Thread Rajneesh Bhardwaj
Both the svm_range_get_attr and svm_range_set_attr helpers use the mm
struct from current, but for a Checkpoint or Restore operation,
current->mm will fetch the mm of the CRIU master process. So modify
these helpers to accept the task mm of a target kfd process to support
Checkpoint Restore.

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index ffec25e642e2..d34508f5e88b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -3203,10 +3203,10 @@ static void svm_range_evict_svm_bo_worker(struct 
work_struct *work)
 }
 
 static int
-svm_range_set_attr(struct kfd_process *p, uint64_t start, uint64_t size,
-  uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs)
+svm_range_set_attr(struct kfd_process *p, struct mm_struct *mm,
+  uint64_t start, uint64_t size, uint32_t nattr,
+  struct kfd_ioctl_svm_attribute *attrs)
 {
-   struct mm_struct *mm = current->mm;
struct list_head update_list;
struct list_head insert_list;
struct list_head remove_list;
@@ -3305,8 +3305,9 @@ svm_range_set_attr(struct kfd_process *p, uint64_t start, 
uint64_t size,
 }
 
 static int
-svm_range_get_attr(struct kfd_process *p, uint64_t start, uint64_t size,
-  uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs)
+svm_range_get_attr(struct kfd_process *p, struct mm_struct *mm,
+  uint64_t start, uint64_t size, uint32_t nattr,
+  struct kfd_ioctl_svm_attribute *attrs)
 {
DECLARE_BITMAP(bitmap_access, MAX_GPU_INSTANCE);
DECLARE_BITMAP(bitmap_aip, MAX_GPU_INSTANCE);
@@ -3316,7 +3317,6 @@ svm_range_get_attr(struct kfd_process *p, uint64_t start, 
uint64_t size,
bool get_accessible = false;
bool get_flags = false;
uint64_t last = start + size - 1UL;
-   struct mm_struct *mm = current->mm;
uint8_t granularity = 0xff;
struct interval_tree_node *node;
struct svm_range_list *svms;
@@ -3485,6 +3485,7 @@ int
 svm_ioctl(struct kfd_process *p, enum kfd_ioctl_svm_op op, uint64_t start,
  uint64_t size, uint32_t nattrs, struct kfd_ioctl_svm_attribute *attrs)
 {
+   struct mm_struct *mm = current->mm;
int r;
 
start >>= PAGE_SHIFT;
@@ -3492,10 +3493,10 @@ svm_ioctl(struct kfd_process *p, enum kfd_ioctl_svm_op 
op, uint64_t start,
 
switch (op) {
case KFD_IOCTL_SVM_OP_SET_ATTR:
-   r = svm_range_set_attr(p, start, size, nattrs, attrs);
+   r = svm_range_set_attr(p, mm, start, size, nattrs, attrs);
break;
case KFD_IOCTL_SVM_OP_GET_ATTR:
-   r = svm_range_get_attr(p, start, size, nattrs, attrs);
+   r = svm_range_get_attr(p, mm, start, size, nattrs, attrs);
break;
default:
r = EINVAL;
-- 
2.17.1



[Patch v5 15/24] drm/amdkfd: CRIU implement gpu_id remapping

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

When doing a restore on a different node, the gpu_id's on the restore
node may be different. But the user space application will still use
the original gpu_id's in its ioctl calls. Add code to create a gpu_id
mapping so that KFD can determine the actual gpu_id during the user
ioctl's.
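
A minimal userspace sketch of the translation this enables, with made-up
gpu_id values; in the kernel the mapping lives in the per-process
kfd_process_device structures:

#include <stdint.h>
#include <stdio.h>

struct gpu_id_map { uint32_t user_gpu_id; uint32_t actual_gpu_id; };

static uint32_t translate_gpu_id(const struct gpu_id_map *map, int n,
				 uint32_t user_id)
{
	for (int i = 0; i < n; i++)
		if (map[i].user_gpu_id == user_id)
			return map[i].actual_gpu_id;
	return 0;	/* not found: the ioctl would fail with -EINVAL */
}

int main(void)
{
	/* checkpoint-time ids on the left, restore-node ids on the right */
	struct gpu_id_map map[] = { { 0x3b42, 0x517f }, { 0x8d6f, 0x9a01 } };

	/* the application keeps passing 0x3b42 from the checkpointed node */
	printf("0x3b42 -> 0x%x\n", translate_gpu_id(map, 2, 0x3b42));
	return 0;
}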

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 468 --
 drivers/gpu/drm/amd/amdkfd/kfd_events.c   |  45 +-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  11 +
 drivers/gpu/drm/amd/amdkfd/kfd_process.c  |  32 ++
 .../amd/amdkfd/kfd_process_queue_manager.c|  18 +-
 5 files changed, 414 insertions(+), 160 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index a4be758647f9..69edeaf3893e 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -293,14 +293,17 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
return err;
 
pr_debug("Looking for gpu id 0x%x\n", args->gpu_id);
-   dev = kfd_device_by_id(args->gpu_id);
-   if (!dev) {
-   pr_debug("Could not find gpu id 0x%x\n", args->gpu_id);
-   return -EINVAL;
-   }
 
	mutex_lock(&p->mutex);
 
+   pdd = kfd_process_device_data_by_id(p, args->gpu_id);
+   if (!pdd) {
+   pr_debug("Could not find gpu id 0x%x\n", args->gpu_id);
+   err = -EINVAL;
+   goto err_pdd;
+   }
+   dev = pdd->dev;
+
pdd = kfd_bind_process_to_device(dev, p);
if (IS_ERR(pdd)) {
err = -ESRCH;
@@ -345,6 +348,7 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
 
 err_create_queue:
 err_bind_process:
+err_pdd:
mutex_unlock(>mutex);
return err;
 }
@@ -491,7 +495,6 @@ static int kfd_ioctl_set_memory_policy(struct file *filep,
struct kfd_process *p, void *data)
 {
struct kfd_ioctl_set_memory_policy_args *args = data;
-   struct kfd_dev *dev;
int err = 0;
struct kfd_process_device *pdd;
enum cache_policy default_policy, alternate_policy;
@@ -506,13 +509,15 @@ static int kfd_ioctl_set_memory_policy(struct file *filep,
return -EINVAL;
}
 
-   dev = kfd_device_by_id(args->gpu_id);
-   if (!dev)
-   return -EINVAL;
-
	mutex_lock(&p->mutex);
+   pdd = kfd_process_device_data_by_id(p, args->gpu_id);
+   if (!pdd) {
+   pr_debug("Could not find gpu id 0x%x\n", args->gpu_id);
+   err = -EINVAL;
+   goto err_pdd;
+   }
 
-   pdd = kfd_bind_process_to_device(dev, p);
+   pdd = kfd_bind_process_to_device(pdd->dev, p);
if (IS_ERR(pdd)) {
err = -ESRCH;
goto out;
@@ -525,7 +530,7 @@ static int kfd_ioctl_set_memory_policy(struct file *filep,
(args->alternate_policy == KFD_IOC_CACHE_POLICY_COHERENT)
   ? cache_policy_coherent : cache_policy_noncoherent;
 
-   if (!dev->dqm->ops.set_cache_memory_policy(dev->dqm,
+   if (!pdd->dev->dqm->ops.set_cache_memory_policy(pdd->dev->dqm,
&pdd->qpd,
default_policy,
alternate_policy,
@@ -534,6 +539,7 @@ static int kfd_ioctl_set_memory_policy(struct file *filep,
err = -EINVAL;
 
 out:
+err_pdd:
	mutex_unlock(&p->mutex);
 
return err;
@@ -543,17 +549,18 @@ static int kfd_ioctl_set_trap_handler(struct file *filep,
struct kfd_process *p, void *data)
 {
struct kfd_ioctl_set_trap_handler_args *args = data;
-   struct kfd_dev *dev;
int err = 0;
struct kfd_process_device *pdd;
 
-   dev = kfd_device_by_id(args->gpu_id);
-   if (!dev)
-   return -EINVAL;
-
	mutex_lock(&p->mutex);
 
-   pdd = kfd_bind_process_to_device(dev, p);
+   pdd = kfd_process_device_data_by_id(p, args->gpu_id);
+   if (!pdd) {
+   err = -EINVAL;
+   goto err_pdd;
+   }
+
+   pdd = kfd_bind_process_to_device(pdd->dev, p);
if (IS_ERR(pdd)) {
err = -ESRCH;
goto out;
@@ -562,6 +569,7 @@ static int kfd_ioctl_set_trap_handler(struct file *filep,
	kfd_process_set_trap_handler(&pdd->qpd, args->tba_addr, args->tma_addr);
 
 out:
+err_pdd:
	mutex_unlock(&p->mutex);
 
return err;
@@ -577,16 +585,20 @@ static int kfd_ioctl_dbg_register(struct file *filep,
bool create_ok;
long status = 0;
 
-   dev = kfd_device_by_id(args->gpu_id);
-   if (!dev)
-   return -EINVAL;
+   mutex_lock(&p->mutex);
+   pdd = kfd_process_device_data_by_id(p, args->gpu_id);
+   if (!pdd) {
+   status = 

[Patch v5 19/24] drm/amdkfd: use user_gpu_id for svm ranges

2022-02-03 Thread Rajneesh Bhardwaj
Currently the SVM ranges use actual_gpu_id, but with Checkpoint Restore
support it is possible that the SVM ranges can be resumed on another node
where the actual_gpu_id may not be the same as the original (user_gpu_id)
gpu id. So modify the svm code to use user_gpu_id.

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_process.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 06e6e9180fbc..8e2780d2f735 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -1797,7 +1797,7 @@ int kfd_process_gpuidx_from_gpuid(struct kfd_process *p, 
uint32_t gpu_id)
int i;
 
for (i = 0; i < p->n_pdds; i++)
-   if (p->pdds[i] && gpu_id == p->pdds[i]->dev->id)
+   if (p->pdds[i] && gpu_id == p->pdds[i]->user_gpu_id)
return i;
return -EINVAL;
 }
@@ -1810,7 +1810,7 @@ kfd_process_gpuid_from_adev(struct kfd_process *p, struct 
amdgpu_device *adev,
 
for (i = 0; i < p->n_pdds; i++)
if (p->pdds[i] && p->pdds[i]->dev->adev == adev) {
-   *gpuid = p->pdds[i]->dev->id;
+   *gpuid = p->pdds[i]->user_gpu_id;
*gpuidx = i;
return 0;
}
-- 
2.17.1



[Patch v5 14/24] drm/amdkfd: CRIU checkpoint and restore events

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

Add support to existing CRIU ioctl's to save and restore events during
criu checkpoint and restore.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c |  70 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c  | 272 ---
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h|  27 ++-
 3 files changed, 280 insertions(+), 89 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 608214ea634d..a4be758647f9 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1008,57 +1008,11 @@ static int kfd_ioctl_create_event(struct file *filp, 
struct kfd_process *p,
 * through the event_page_offset field.
 */
if (args->event_page_offset) {
-   struct kfd_dev *kfd;
-   struct kfd_process_device *pdd;
-   void *mem, *kern_addr;
-   uint64_t size;
-
-   kfd = kfd_device_by_id(GET_GPU_ID(args->event_page_offset));
-   if (!kfd) {
-   pr_err("Getting device by id failed in %s\n", __func__);
-   return -EINVAL;
-   }
-
	mutex_lock(&p->mutex);
-
-   if (p->signal_page) {
-   pr_err("Event page is already set\n");
-   err = -EINVAL;
-   goto out_unlock;
-   }
-
-   pdd = kfd_bind_process_to_device(kfd, p);
-   if (IS_ERR(pdd)) {
-   err = PTR_ERR(pdd);
-   goto out_unlock;
-   }
-
-   mem = kfd_process_device_translate_handle(pdd,
-   GET_IDR_HANDLE(args->event_page_offset));
-   if (!mem) {
-   pr_err("Can't find BO, offset is 0x%llx\n",
-  args->event_page_offset);
-   err = -EINVAL;
-   goto out_unlock;
-   }
-
-   err = amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(kfd->adev,
-   mem, &kern_addr, &size);
-   if (err) {
-   pr_err("Failed to map event page to kernel\n");
-   goto out_unlock;
-   }
-
-   err = kfd_event_page_set(p, kern_addr, size);
-   if (err) {
-   pr_err("Failed to set event page\n");
-   amdgpu_amdkfd_gpuvm_unmap_gtt_bo_from_kernel(kfd->adev, mem);
-   goto out_unlock;
-   }
-
-   p->signal_handle = args->event_page_offset;
-
+   err = kfd_kmap_event_page(p, args->event_page_offset);
	mutex_unlock(&p->mutex);
+   if (err)
+   return err;
}
 
err = kfd_event_create(filp, p, args->event_type,
@@ -1067,10 +1021,7 @@ static int kfd_ioctl_create_event(struct file *filp, struct kfd_process *p,
&args->event_page_offset,
&args->event_slot_index);
 
-   return err;
-
-out_unlock:
-   mutex_unlock(&p->mutex);
+   pr_debug("Created event (id:0x%08x) (%s)\n", args->event_id, __func__);
return err;
 }
 
@@ -2031,7 +1982,7 @@ static int criu_get_process_object_info(struct 
kfd_process *p,
if (ret)
return ret;
 
-   num_events = 0; /* TODO: Implement Events */
+   num_events = kfd_get_num_events(p);
	num_svm_ranges = 0; /* TODO: Implement SVM-Ranges */
 
*num_objects = num_queues + num_events + num_svm_ranges;
@@ -2040,7 +1991,7 @@ static int criu_get_process_object_info(struct 
kfd_process *p,
priv_size = sizeof(struct kfd_criu_process_priv_data);
priv_size += *num_bos * sizeof(struct kfd_criu_bo_priv_data);
priv_size += queues_priv_data_size;
-   /* TODO: Add Events priv size */
+   priv_size += num_events * sizeof(struct kfd_criu_event_priv_data);
/* TODO: Add SVM ranges priv size */
*objs_priv_size = priv_size;
}
@@ -2102,7 +2053,10 @@ static int criu_checkpoint(struct file *filep,
if (ret)
goto exit_unlock;
 
-   /* TODO: Dump Events */
+   ret = kfd_criu_checkpoint_events(p, (uint8_t __user *)args->priv_data,
+&priv_offset);
+   if (ret)
+   goto exit_unlock;
 
/* TODO: Dump SVM-Ranges */
}
@@ -2410,8 +2364,8 @@ static int criu_restore_objects(struct file *filep,
goto exit;
break;
case KFD_CRIU_OBJECT_TYPE_EVENT:
-   /* TODO: Implement Events */
-   *priv_offset += 

[Patch v5 24/24] drm/amdkfd: Bump up KFD API version for CRIU

2022-02-03 Thread Rajneesh Bhardwaj
 - Change KFD minor version to 7 for CRIU

Proposed userspace changes:
https://github.com/RadeonOpenCompute/criu
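
A minimal sketch of how a userspace tool such as the CRIU plugin could
gate on the bumped version via the existing AMDKFD_IOC_GET_VERSION
ioctl (error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kfd_ioctl.h>

int main(void)
{
	struct kfd_ioctl_get_version_args args = {0};
	int fd = open("/dev/kfd", O_RDWR);

	if (fd < 0)
		return 1;
	if (ioctl(fd, AMDKFD_IOC_GET_VERSION, &args) == 0)
		printf("KFD ioctl %u.%u, CRIU %s\n", args.major_version,
		       args.minor_version,
		       args.minor_version >= 7 ? "supported" : "unsupported");
	close(fd);
	return 0;
}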

Signed-off-by: Rajneesh Bhardwaj 
---
 include/uapi/linux/kfd_ioctl.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/kfd_ioctl.h b/include/uapi/linux/kfd_ioctl.h
index 49429a6c42fc..e6a56c146920 100644
--- a/include/uapi/linux/kfd_ioctl.h
+++ b/include/uapi/linux/kfd_ioctl.h
@@ -32,9 +32,10 @@
  * - 1.4 - Indicate new SRAM EDC bit in device properties
  * - 1.5 - Add SVM API
  * - 1.6 - Query clear flags in SVM get_attr API
+ * - 1.7 - Checkpoint Restore (CRIU) API
  */
 #define KFD_IOCTL_MAJOR_VERSION 1
-#define KFD_IOCTL_MINOR_VERSION 6
+#define KFD_IOCTL_MINOR_VERSION 7
 
 struct kfd_ioctl_get_version_args {
__u32 major_version;/* from KFD */
-- 
2.17.1



[Patch v5 16/24] drm/amdkfd: CRIU export BOs as prime dmabuf objects

2022-02-03 Thread Rajneesh Bhardwaj
KFD buffer objects do not have a GEM handle associated with them, so
they cannot directly be used with libdrm to initiate a system DMA
(sDMA) operation to speed up the checkpoint and restore operation.
Export them as dmabuf objects instead and use them with the libdrm
helper (amdgpu_bo_import) to further process the sDMA command
submissions.

With sDMA, we see a huge improvement in checkpoint and restore
operations compared to the generic PCI-based access via the host data
path.
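
For reference, a sketch of the userspace side under the assumptions
above: the fd returned in bo_bucket->dmabuf_fd is handed to libdrm's
amdgpu_bo_import(); device setup is abbreviated and the render node
path is an assumption:

/* link with -ldrm_amdgpu (headers from libdrm) */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <amdgpu.h>

static int import_checkpointed_bo(amdgpu_device_handle dev, int dmabuf_fd)
{
	struct amdgpu_bo_import_result res = {0};
	int r = amdgpu_bo_import(dev, amdgpu_bo_handle_type_dma_buf_fd,
				 (uint64_t)dmabuf_fd, &res);

	if (r)
		return r;
	/* res.buf_handle can now back sDMA copy submissions */
	printf("imported BO of %llu bytes\n", (unsigned long long)res.alloc_size);
	return 0;
}

int main(void)
{
	amdgpu_device_handle dev;
	uint32_t major, minor;
	int render_fd = open("/dev/dri/renderD128", O_RDWR);
	int dmabuf_fd = -1;	/* would come from bo_bucket->dmabuf_fd */

	if (render_fd < 0 || amdgpu_device_initialize(render_fd, &major, &minor, &dev))
		return 1;
	if (dmabuf_fd >= 0)
		import_checkpointed_bo(dev, dmabuf_fd);
	amdgpu_device_deinitialize(dev);
	close(render_fd);
	return 0;
}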

Suggested-by: Felix Kuehling 
Signed-off-by: Rajneesh Bhardwaj 
Signed-off-by: David Yat Sin 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 71 +++-
 1 file changed, 69 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 69edeaf3893e..ab5107a3fe36 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include "kfd_priv.h"
 #include "kfd_device_queue_manager.h"
@@ -42,6 +43,7 @@
 #include "kfd_svm.h"
 #include "amdgpu_amdkfd.h"
 #include "kfd_smi_events.h"
+#include "amdgpu_dma_buf.h"
 
 static long kfd_ioctl(struct file *, unsigned int, unsigned long);
 static int kfd_open(struct inode *, struct file *);
@@ -1936,6 +1938,33 @@ uint32_t get_process_num_bos(struct kfd_process *p)
return num_of_bos;
 }
 
+static int criu_get_prime_handle(struct drm_gem_object *gobj, int flags,
+ u32 *shared_fd)
+{
+   struct dma_buf *dmabuf;
+   int ret;
+
+   dmabuf = amdgpu_gem_prime_export(gobj, flags);
+   if (IS_ERR(dmabuf)) {
+   ret = PTR_ERR(dmabuf);
+   pr_err("dmabuf export failed for the BO\n");
+   return ret;
+   }
+
+   ret = dma_buf_fd(dmabuf, flags);
+   if (ret < 0) {
+   pr_err("dmabuf create fd failed, ret:%d\n", ret);
+   goto out_free_dmabuf;
+   }
+
+   *shared_fd = ret;
+   return 0;
+
+out_free_dmabuf:
+   dma_buf_put(dmabuf);
+   return ret;
+}
+
 static int criu_checkpoint_bos(struct kfd_process *p,
   uint32_t num_bos,
   uint8_t __user *user_bos,
@@ -1997,6 +2026,14 @@ static int criu_checkpoint_bos(struct kfd_process *p,
goto exit;
}
}
+   if (bo_bucket->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM) {
+   ret = criu_get_prime_handle(&kfd_bo->tbo.base,
+   bo_bucket->alloc_flags &
+   KFD_IOC_ALLOC_MEM_FLAGS_WRITABLE ? DRM_RDWR : 0,
+   &bo_bucket->dmabuf_fd);
+   if (ret)
+   goto exit;
+   }
	if (bo_bucket->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_DOORBELL)
bo_bucket->offset = KFD_MMAP_TYPE_DOORBELL |
KFD_MMAP_GPU_ID(pdd->dev->id);
@@ -2041,6 +2078,10 @@ static int criu_checkpoint_bos(struct kfd_process *p,
*priv_offset += num_bos * sizeof(*bo_privs);
 
 exit:
+   while (ret && bo_index--) {
+   if (bo_buckets[bo_index].alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM)
+   close_fd(bo_buckets[bo_index].dmabuf_fd);
+   }
 
kvfree(bo_buckets);
kvfree(bo_privs);
@@ -2141,16 +2182,28 @@ static int criu_checkpoint(struct file *filep,
	ret = kfd_criu_checkpoint_queues(p, (uint8_t __user *)args->priv_data,
	 &priv_offset);
if (ret)
-   goto exit_unlock;
+   goto close_bo_fds;
 
	ret = kfd_criu_checkpoint_events(p, (uint8_t __user *)args->priv_data,
	 &priv_offset);
if (ret)
-   goto exit_unlock;
+   goto close_bo_fds;
 
/* TODO: Dump SVM-Ranges */
}
 
+close_bo_fds:
+   if (ret) {
+   /* If IOCTL returns err, user assumes all FDs opened in criu_dump_bos are closed */
+   uint32_t i;
+   struct kfd_criu_bo_bucket *bo_buckets = (struct kfd_criu_bo_bucket *)args->bos;
+
+   for (i = 0; i < num_bos; i++) {
+   if (bo_buckets[i].alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM)
+   close_fd(bo_buckets[i].dmabuf_fd);
+   }
+   }
+
 exit_unlock:
	mutex_unlock(&p->mutex);
if (ret)
@@ -2345,6 +2398,7 @@ static int criu_restore_bos(struct kfd_process *p,
struct kfd_criu_bo_priv_data *bo_priv;
struct kfd_dev *dev;
struct 

[Patch v5 12/24] drm/amdkfd: CRIU checkpoint and restore queue mqds

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

Checkpoint contents of queue MQD's on CRIU dump and restore them during
CRIU restore.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |   2 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c |  73 +++-
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  12 +-
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h  |   7 +
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c  |  70 
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  |  71 
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c   |  71 
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c   |  72 
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |   5 +
 .../amd/amdkfd/kfd_process_queue_manager.c| 157 --
 11 files changed, 516 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index d35911550792..999672602252 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -311,7 +311,7 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
p->pasid,
dev->id);
 
-   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id, NULL,
+   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id, NULL, NULL,
&doorbell_offset_in_process);
if (err != 0)
goto err_create_queue;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
index 0c50e67e2b51..3a5303ebcabf 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
@@ -185,7 +185,7 @@ static int dbgdev_register_diq(struct kfd_dbgdev *dbgdev)
properties.type = KFD_QUEUE_TYPE_DIQ;
 
status = pqm_create_queue(dbgdev->pqm, dbgdev->dev, NULL,
-   &properties, &qid, NULL, NULL);
+   &properties, &qid, NULL, NULL, NULL);
 
if (status) {
pr_err("Failed to create DIQ\n");
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 13317d2c8959..42933610d4e1 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -322,7 +322,8 @@ static void deallocate_vmid(struct device_queue_manager 
*dqm,
 static int create_queue_nocpsch(struct device_queue_manager *dqm,
struct queue *q,
struct qcm_process_device *qpd,
-   const struct kfd_criu_queue_priv_data *qd)
+   const struct kfd_criu_queue_priv_data *qd,
+   const void *restore_mqd)
 {
struct mqd_manager *mqd_mgr;
int retval;
@@ -381,8 +382,14 @@ static int create_queue_nocpsch(struct 
device_queue_manager *dqm,
retval = -ENOMEM;
goto out_deallocate_doorbell;
}
-   mqd_mgr->init_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj,
-   &q->gart_mqd_addr, &q->properties);
+
+   if (qd)
+   mqd_mgr->restore_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj, &q->gart_mqd_addr,
+&q->properties, restore_mqd);
+   else
+   mqd_mgr->init_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj,
+   &q->gart_mqd_addr, &q->properties);
+
if (q->properties.is_active) {
if (!dqm->sched_running) {
WARN_ONCE(1, "Load non-HWS mqd while stopped\n");
@@ -1334,7 +1341,8 @@ static void destroy_kernel_queue_cpsch(struct 
device_queue_manager *dqm,
 
 static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue 
*q,
struct qcm_process_device *qpd,
-   const struct kfd_criu_queue_priv_data *qd)
+   const struct kfd_criu_queue_priv_data *qd,
+   const void *restore_mqd)
 {
int retval;
struct mqd_manager *mqd_mgr;
@@ -1380,8 +1388,13 @@ static int create_queue_cpsch(struct 
device_queue_manager *dqm, struct queue *q,
 * updates the is_evicted flag but is a no-op otherwise.
 */
q->properties.is_evicted = !!qpd->evicted;
-   mqd_mgr->init_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj,
-   &q->gart_mqd_addr, &q->properties);
+
+   if (qd)
+   mqd_mgr->restore_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj, &q->gart_mqd_addr,
+&q->properties, restore_mqd);
+   else
+   mqd_mgr->init_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj,
+   &q->gart_mqd_addr, &q->properties);
 
	list_add(&q->list, &qpd->queues_list);
qpd->queue_count++;
@@ -1784,6 +1797,50 @@ static int 

[Patch v5 22/24] drm/amdkfd: CRIU prepare for svm resume

2022-02-03 Thread Rajneesh Bhardwaj
During the CRIU restore phase, the VMAs for the virtual address ranges
are not yet at their final location, so in this stage only cache the
data required to successfully resume the svm ranges during the imminent
CRIU resume phase.
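
The same cache-now, replay-later shape in a standalone userspace sketch;
struct svm_meta is an illustrative stand-in for criu_svm_metadata, and
the printf marks where the kernel would call svm_range_set_attr():

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct svm_meta {
	uint64_t start_addr;
	uint64_t size;
	struct svm_meta *next;
};

static struct svm_meta *pending;	/* plays the role of criu_svm_metadata_list */

static void restore_stage(uint64_t start, uint64_t size)
{
	struct svm_meta *m = malloc(sizeof(*m));

	m->start_addr = start;
	m->size = size;
	m->next = pending;
	pending = m;	/* cache only: nothing is applied yet */
}

static void resume_stage(void)
{
	while (pending) {
		struct svm_meta *m = pending;

		pending = m->next;
		printf("resuming range 0x%llx (0x%llx pages)\n",
		       (unsigned long long)m->start_addr,
		       (unsigned long long)m->size);
		free(m);
	}
}

int main(void)
{
	restore_stage(0x7f0000000000ULL, 512);
	restore_stage(0x7f0000400000ULL, 1024);
	resume_stage();	/* runs once the VMAs are at their final location */
	return 0;
}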

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h|  1 +
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 58 
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 12 +
 4 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 721c86ceba22..c143f242a84d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2643,8 +2643,8 @@ static int criu_restore_objects(struct file *filep,
goto exit;
break;
case KFD_CRIU_OBJECT_TYPE_SVM_RANGE:
-   /* TODO: Implement SVM range */
-   *priv_offset += sizeof(struct kfd_criu_svm_range_priv_data);
+   ret = kfd_criu_restore_svm(p, (uint8_t __user *)args->priv_data,
+  priv_offset, max_priv_data_size);
if (ret)
goto exit;
break;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 715dd0d4fac5..74ff4132a163 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -790,6 +790,7 @@ struct svm_range_list {
struct list_headlist;
struct work_struct  deferred_list_work;
struct list_headdeferred_range_list;
+   struct list_headcriu_svm_metadata_list;
spinlock_t  deferred_list_lock;
atomic_tevicted_ranges;
atomic_tdrain_pagefaults;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 7cf63995c079..41ac049b3316 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -45,6 +45,11 @@
  */
 #define AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING   2000
 
+struct criu_svm_metadata {
+   struct list_head list;
+   struct kfd_criu_svm_range_priv_data data;
+};
+
 static void svm_range_evict_svm_bo_worker(struct work_struct *work);
 static bool
 svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
@@ -2875,6 +2880,7 @@ int svm_range_list_init(struct kfd_process *p)
	INIT_DELAYED_WORK(&svms->restore_work, svm_range_restore_work);
	INIT_WORK(&svms->deferred_list_work, svm_range_deferred_list_work);
	INIT_LIST_HEAD(&svms->deferred_range_list);
+	INIT_LIST_HEAD(&svms->criu_svm_metadata_list);
	spin_lock_init(&svms->deferred_list_lock);
 
for (i = 0; i < p->n_pdds; i++)
@@ -3481,6 +3487,58 @@ svm_range_get_attr(struct kfd_process *p, struct 
mm_struct *mm,
return 0;
 }
 
+int kfd_criu_restore_svm(struct kfd_process *p,
+uint8_t __user *user_priv_ptr,
+uint64_t *priv_data_offset,
+uint64_t max_priv_data_size)
+{
+   uint64_t svm_priv_data_size, svm_object_md_size, svm_attrs_size;
+   int nattr_common = 4, nattr_accessibility = 1;
+   struct criu_svm_metadata *criu_svm_md = NULL;
+   struct svm_range_list *svms = &p->svms;
+   uint32_t num_devices;
+   int ret = 0;
+
+   num_devices = p->n_pdds;
+   /* Handle one SVM range object at a time, also the number of gpus are
+* assumed to be same on the restore node, checking must be done while
+* evaluating the topology earlier */
+
+   svm_attrs_size = sizeof(struct kfd_ioctl_svm_attribute) *
+   (nattr_common + nattr_accessibility * num_devices);
+   svm_object_md_size = sizeof(struct criu_svm_metadata) + svm_attrs_size;
+
+   svm_priv_data_size = sizeof(struct kfd_criu_svm_range_priv_data) +
+   svm_attrs_size;
+
+   criu_svm_md = kzalloc(svm_object_md_size, GFP_KERNEL);
+   if (!criu_svm_md) {
+   pr_err("failed to allocate memory to store svm metadata\n");
+   return -ENOMEM;
+   }
+   if (*priv_data_offset + svm_priv_data_size > max_priv_data_size) {
+   ret = -EINVAL;
+   goto exit;
+   }
+
+   ret = copy_from_user(&criu_svm_md->data, user_priv_ptr + *priv_data_offset,
+svm_priv_data_size);
+   if (ret) {
+   ret = -EFAULT;
+   goto exit;
+   }
+   *priv_data_offset += svm_priv_data_size;
+
+   list_add_tail(&criu_svm_md->list, &svms->criu_svm_metadata_list);
+
+   return 0;
+
+
+exit:
+   kfree(criu_svm_md);
+   return 

[Patch v5 21/24] drm/amdkfd: CRIU Save Shared Virtual Memory ranges

2022-02-03 Thread Rajneesh Bhardwaj
During checkpoint stage, save the shared virtual memory ranges and
attributes for the target process. A process may contain a number of svm
ranges and each range might contain a number of attributes. Not all
attributes may be applicable for a given prange, but during checkpoint
we store all possible values for the max possible attribute types.

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 95 
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 10 +++
 3 files changed, 108 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index a755ea68a428..721c86ceba22 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2196,7 +2196,9 @@ static int criu_checkpoint(struct file *filep,
if (ret)
goto close_bo_fds;
 
-   /* TODO: Dump SVM-Ranges */
+   ret = kfd_criu_checkpoint_svm(p, (uint8_t __user *)args->priv_data, &priv_offset);
+   if (ret)
+   goto close_bo_fds;
}
 
 close_bo_fds:
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 64cd7712c098..7cf63995c079 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -3540,6 +3540,101 @@ int svm_range_get_info(struct kfd_process *p, uint32_t 
*num_svm_ranges,
return 0;
 }
 
+int kfd_criu_checkpoint_svm(struct kfd_process *p,
+   uint8_t __user *user_priv_data,
+   uint64_t *priv_data_offset)
+{
+   struct kfd_criu_svm_range_priv_data *svm_priv = NULL;
+   struct kfd_ioctl_svm_attribute *query_attr = NULL;
+   uint64_t svm_priv_data_size, query_attr_size = 0;
+   int index, nattr_common = 4, ret = 0;
+   struct svm_range_list *svms;
+   int num_devices = p->n_pdds;
+   struct svm_range *prange;
+   struct mm_struct *mm;
+
+   svms = &p->svms;
+   if (!svms)
+   return -EINVAL;
+
+   mm = get_task_mm(p->lead_thread);
+   if (!mm) {
+   pr_err("failed to get mm for the target process\n");
+   return -ESRCH;
+   }
+
+   query_attr_size = sizeof(struct kfd_ioctl_svm_attribute) *
+   (nattr_common + num_devices);
+
+   query_attr = kzalloc(query_attr_size, GFP_KERNEL);
+   if (!query_attr) {
+   ret = -ENOMEM;
+   goto exit;
+   }
+
+   query_attr[0].type = KFD_IOCTL_SVM_ATTR_PREFERRED_LOC;
+   query_attr[1].type = KFD_IOCTL_SVM_ATTR_PREFETCH_LOC;
+   query_attr[2].type = KFD_IOCTL_SVM_ATTR_SET_FLAGS;
+   query_attr[3].type = KFD_IOCTL_SVM_ATTR_GRANULARITY;
+
+   for (index = 0; index < num_devices; index++) {
+   struct kfd_process_device *pdd = p->pdds[index];
+
+   query_attr[index + nattr_common].type =
+   KFD_IOCTL_SVM_ATTR_ACCESS;
+   query_attr[index + nattr_common].value = pdd->user_gpu_id;
+   }
+
+   svm_priv_data_size = sizeof(*svm_priv) + query_attr_size;
+
+   svm_priv = kzalloc(svm_priv_data_size, GFP_KERNEL);
+   if (!svm_priv) {
+   ret = -ENOMEM;
+   goto exit_query;
+   }
+
+   index = 0;
+   list_for_each_entry(prange, &svms->list, list) {
+
+   svm_priv->object_type = KFD_CRIU_OBJECT_TYPE_SVM_RANGE;
+   svm_priv->start_addr = prange->start;
+   svm_priv->size = prange->npages;
+   memcpy(&svm_priv->attrs, query_attr, query_attr_size);
+   pr_debug("CRIU: prange: 0x%p start: 0x%lx\t npages: 0x%llx end: 0x%llx\t size: 0x%llx\n",
+prange, prange->start, prange->npages,
+prange->start + prange->npages - 1,
+prange->npages * PAGE_SIZE);
+
+   ret = svm_range_get_attr(p, mm, svm_priv->start_addr,
+svm_priv->size,
+(nattr_common + num_devices),
+svm_priv->attrs);
+   if (ret) {
+   pr_err("CRIU: failed to obtain range attributes\n");
+   goto exit_priv;
+   }
+
+   ret = copy_to_user(user_priv_data + *priv_data_offset,
+  svm_priv, svm_priv_data_size);
+   if (ret) {
+   pr_err("Failed to copy svm priv to user\n");
+   goto exit_priv;
+   }
+
+   *priv_data_offset += svm_priv_data_size;
+
+   }
+
+
+exit_priv:
+   kfree(svm_priv);
+exit_query:
+   kfree(query_attr);
+exit:
+   mmput(mm);
+   return ret;
+}
+
 int
 svm_ioctl(struct kfd_process *p, enum kfd_ioctl_svm_op 

[Patch v5 17/24] drm/amdkfd: CRIU checkpoint and restore xnack mode

2022-02-03 Thread Rajneesh Bhardwaj
Recoverable device page faults are represented by the xnack mode setting
inside a kfd process. For CR, we don't consider negative values, which
are typically used for querying the current xnack mode without modifying
it.

Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 15 +++
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h|  1 +
 2 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index ab5107a3fe36..3ec44f71307d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1848,6 +1848,11 @@ static int criu_checkpoint_process(struct kfd_process *p,
	memset(&process_priv, 0, sizeof(process_priv));
 
process_priv.version = KFD_CRIU_PRIV_VERSION;
+   /* For CR, we don't consider negative xnack mode which is used for
+* querying without changing it, here 0 simply means disabled and 1
+* means enabled so retry for finding a valid PTE.
+*/
+   process_priv.xnack_mode = p->xnack_enabled ? 1 : 0;
 
ret = copy_to_user(user_priv_data + *priv_offset,
&process_priv, sizeof(process_priv));
@@ -2241,6 +2246,16 @@ static int criu_restore_process(struct kfd_process *p,
return -EINVAL;
}
 
+   pr_debug("Setting XNACK mode\n");
+   if (process_priv.xnack_mode && !kfd_process_xnack_mode(p, true)) {
+   pr_err("xnack mode cannot be set\n");
+   ret = -EPERM;
+   goto exit;
+   } else {
+   pr_debug("set xnack mode: %d\n", process_priv.xnack_mode);
+   p->xnack_enabled = process_priv.xnack_mode;
+   }
+
 exit:
return ret;
 }
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index df68c4274bd9..903ad4a263f0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -1056,6 +1056,7 @@ void kfd_process_set_trap_handler(struct 
qcm_process_device *qpd,
 
 struct kfd_criu_process_priv_data {
uint32_t version;
+   uint32_t xnack_mode;
 };
 
 struct kfd_criu_device_priv_data {
-- 
2.17.1



[Patch v5 11/24] drm/amdkfd: CRIU restore queue doorbell id

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

When re-creating queues during CRIU restore, restore the queue with the
same doorbell id value used during CRIU dump.
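
The claim-or-fallback logic, reduced to a standalone userspace sketch
over a plain bitmap; MAX_QUEUES and the return codes are illustrative,
not the KFD values:

#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define MAX_QUEUES 1024
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long bitmap[MAX_QUEUES / BITS_PER_WORD];

static int alloc_doorbell(const uint32_t *restore_id)
{
	if (restore_id) {
		/* restore path: the requested id must still be free */
		unsigned long bit = 1UL << (*restore_id % BITS_PER_WORD);
		unsigned long *word = &bitmap[*restore_id / BITS_PER_WORD];

		if (*word & bit)
			return -1;	/* like -EINVAL: already taken */
		*word |= bit;
		return (int)*restore_id;
	}
	for (uint32_t id = 0; id < MAX_QUEUES; id++) {
		/* normal path: reserve the first free id */
		unsigned long bit = 1UL << (id % BITS_PER_WORD);
		unsigned long *word = &bitmap[id / BITS_PER_WORD];

		if (!(*word & bit)) {
			*word |= bit;
			return (int)id;
		}
	}
	return -2;	/* like -EBUSY: none free */
}

int main(void)
{
	uint32_t want = 7;

	assert(alloc_doorbell(&want) == 7);	/* restore claims id 7 */
	assert(alloc_doorbell(&want) == -1);	/* restoring 7 twice fails */
	assert(alloc_doorbell(NULL) == 0);	/* normal path gets first free */
	return 0;
}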

Signed-off-by: David Yat Sin 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 60 +--
 1 file changed, 41 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 15fa2dc6dcba..13317d2c8959 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -144,7 +144,13 @@ static void decrement_queue_count(struct 
device_queue_manager *dqm,
dqm->active_cp_queue_count--;
 }
 
-static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
+/*
+ * Allocate a doorbell ID to this queue.
+ * If doorbell_id is passed in, make sure requested ID is valid then allocate it.
+ */
+static int allocate_doorbell(struct qcm_process_device *qpd,
+struct queue *q,
+uint32_t const *restore_id)
 {
struct kfd_dev *dev = qpd->dqm->dev;
 
@@ -152,6 +158,10 @@ static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
/* On pre-SOC15 chips we need to use the queue ID to
 * preserve the user mode ABI.
 */
+
+   if (restore_id && *restore_id != q->properties.queue_id)
+   return -EINVAL;
+
q->doorbell_id = q->properties.queue_id;
} else if (q->properties.type == KFD_QUEUE_TYPE_SDMA ||
q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI) {
@@ -160,25 +170,37 @@ static int allocate_doorbell(struct qcm_process_device 
*qpd, struct queue *q)
 * The doorbell index distance between RLC (2*i) and (2*i+1)
 * for a SDMA engine is 512.
 */
-   uint32_t *idx_offset =
-   dev->shared_resources.sdma_doorbell_idx;
 
-   q->doorbell_id = idx_offset[q->properties.sdma_engine_id]
-   + (q->properties.sdma_queue_id & 1)
-   * KFD_QUEUE_DOORBELL_MIRROR_OFFSET
-   + (q->properties.sdma_queue_id >> 1);
+   uint32_t *idx_offset = dev->shared_resources.sdma_doorbell_idx;
+   uint32_t valid_id = idx_offset[q->properties.sdma_engine_id]
+   + (q->properties.sdma_queue_id 
& 1)
+   * 
KFD_QUEUE_DOORBELL_MIRROR_OFFSET
+   + (q->properties.sdma_queue_id 
>> 1);
+
+   if (restore_id && *restore_id != valid_id)
+   return -EINVAL;
+   q->doorbell_id = valid_id;
} else {
-   /* For CP queues on SOC15 reserve a free doorbell ID */
-   unsigned int found;
-
-   found = find_first_zero_bit(qpd->doorbell_bitmap,
-   KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
-   if (found >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) {
-   pr_debug("No doorbells available");
-   return -EBUSY;
+   /* For CP queues on SOC15 */
+   if (restore_id) {
+   /* make sure that ID is free  */
+   if (__test_and_set_bit(*restore_id, 
qpd->doorbell_bitmap))
+   return -EINVAL;
+
+   q->doorbell_id = *restore_id;
+   } else {
+   /* or reserve a free doorbell ID */
+   unsigned int found;
+
+   found = find_first_zero_bit(qpd->doorbell_bitmap,
+   
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
+   if (found >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) {
+   pr_debug("No doorbells available");
+   return -EBUSY;
+   }
+   set_bit(found, qpd->doorbell_bitmap);
+   q->doorbell_id = found;
}
-   set_bit(found, qpd->doorbell_bitmap);
-   q->doorbell_id = found;
}
 
q->properties.doorbell_off =
@@ -346,7 +368,7 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
dqm->asic_ops.init_sdma_vm(dqm, q, qpd);
}
 
-   retval = allocate_doorbell(qpd, q);
+   retval = allocate_doorbell(qpd, q, qd ? &qd->doorbell_id : NULL);
if (retval)
goto out_deallocate_hqd;
 
@@ -1333,7 +1355,7 @@ static int create_queue_cpsch(struct device_queue_manager 
*dqm, struct queue *q,
goto out;
}
 
-   retval = allocate_doorbell(qpd, q);
+   retval = allocate_doorbell(qpd, q, qd ? &qd->doorbell_id : 
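
For reference, the static SDMA doorbell index that allocate_doorbell()
recomputes above (and that a restore request must match) can be reproduced
in isolation. A small userspace sketch, taking
KFD_QUEUE_DOORBELL_MIRROR_OFFSET as 512 per the comment above and using a
made-up idx_offset table:

#include <stdint.h>
#include <stdio.h>

#define KFD_QUEUE_DOORBELL_MIRROR_OFFSET 512

/* Mirrors the valid_id computation in allocate_doorbell() for SDMA queues:
 * the doorbells for queues 2*i and 2*i+1 of an engine are 512 indices apart.
 */
static uint32_t sdma_doorbell_id(const uint32_t *idx_offset,
				 uint32_t engine_id, uint32_t queue_id)
{
	return idx_offset[engine_id]
		+ (queue_id & 1) * KFD_QUEUE_DOORBELL_MIRROR_OFFSET
		+ (queue_id >> 1);
}

int main(void)
{
	/* illustrative per-engine base offsets, not real ASIC values */
	uint32_t idx_offset[2] = { 0x00, 0x02 };
	uint32_t q;

	/* engine 0: queue 0 -> 0, queue 1 -> 512, queue 2 -> 1, queue 3 -> 513 */
	for (q = 0; q < 4; q++)
		printf("engine 0 queue %u -> doorbell %u\n", q,
		       sdma_doorbell_id(idx_offset, 0, q));
	return 0;
}

Because the id is fully determined by the engine and queue ids, rejecting a
mismatching *restore_id with -EINVAL is all the restore path has to do.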

[Patch v5 08/24] drm/amdkfd: CRIU add queues support

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

Add support to existing CRIU ioctl's to save number of queues and queue
properties for each queue during checkpoint and re-create queues on
restore.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 110 -
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  43 +++-
 .../amd/amdkfd/kfd_process_queue_manager.c| 208 ++
 3 files changed, 353 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 6af6deeda523..d049f9cbbc79 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2015,19 +2015,36 @@ static int criu_checkpoint_bos(struct kfd_process *p,
return ret;
 }
 
-static void criu_get_process_object_info(struct kfd_process *p,
-uint32_t *num_bos,
-uint64_t *objs_priv_size)
+static int criu_get_process_object_info(struct kfd_process *p,
+   uint32_t *num_bos,
+   uint32_t *num_objects,
+   uint64_t *objs_priv_size)
 {
+   int ret;
uint64_t priv_size;
+   uint32_t num_queues, num_events, num_svm_ranges;
+   uint64_t queues_priv_data_size;
 
*num_bos = get_process_num_bos(p);
 
+   ret = kfd_process_get_queue_info(p, &num_queues, 
&queues_priv_data_size);
+   if (ret)
+   return ret;
+
+   num_events = 0; /* TODO: Implement Events */
+   num_svm_ranges = 0; /* TODO: Implement SVM-Ranges */
+
+   *num_objects = num_queues + num_events + num_svm_ranges;
+
if (objs_priv_size) {
priv_size = sizeof(struct kfd_criu_process_priv_data);
priv_size += *num_bos * sizeof(struct kfd_criu_bo_priv_data);
+   priv_size += queues_priv_data_size;
+   /* TODO: Add Events priv size */
+   /* TODO: Add SVM ranges priv size */
*objs_priv_size = priv_size;
}
+   return 0;
 }
 
 static int criu_checkpoint(struct file *filep,
@@ -2035,7 +2052,7 @@ static int criu_checkpoint(struct file *filep,
   struct kfd_ioctl_criu_args *args)
 {
int ret;
-   uint32_t num_bos;
+   uint32_t num_bos, num_objects;
uint64_t priv_size, priv_offset = 0;
 
if (!args->bos || !args->priv_data)
@@ -2057,9 +2074,12 @@ static int criu_checkpoint(struct file *filep,
goto exit_unlock;
}
 
-   criu_get_process_object_info(p, &num_bos, &priv_size);
+   ret = criu_get_process_object_info(p, &num_bos, &num_objects, 
&priv_size);
+   if (ret)
+   goto exit_unlock;
 
if (num_bos != args->num_bos ||
+   num_objects != args->num_objects ||
priv_size != args->priv_data_size) {
 
ret = -EINVAL;
@@ -2076,6 +2096,17 @@ static int criu_checkpoint(struct file *filep,
if (ret)
goto exit_unlock;
 
+   if (num_objects) {
+   ret = kfd_criu_checkpoint_queues(p, (uint8_t __user 
*)args->priv_data,
+						 &priv_offset);
+   if (ret)
+   goto exit_unlock;
+
+   /* TODO: Dump Events */
+
+   /* TODO: Dump SVM-Ranges */
+   }
+
 exit_unlock:
	mutex_unlock(&p->mutex);
if (ret)
@@ -2344,6 +2375,62 @@ static int criu_restore_bos(struct kfd_process *p,
return ret;
 }
 
+static int criu_restore_objects(struct file *filep,
+   struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args,
+   uint64_t *priv_offset,
+   uint64_t max_priv_data_size)
+{
+   int ret = 0;
+   uint32_t i;
+
+   BUILD_BUG_ON(offsetof(struct kfd_criu_queue_priv_data, object_type));
+   BUILD_BUG_ON(offsetof(struct kfd_criu_event_priv_data, object_type));
+   BUILD_BUG_ON(offsetof(struct kfd_criu_svm_range_priv_data, 
object_type));
+
+   for (i = 0; i < args->num_objects; i++) {
+   uint32_t object_type;
+
+   if (*priv_offset + sizeof(object_type) > max_priv_data_size) {
+   pr_err("Invalid private data size\n");
+   return -EINVAL;
+   }
+
+   ret = get_user(object_type, (uint32_t __user *)(args->priv_data 
+ *priv_offset));
+   if (ret) {
+   pr_err("Failed to copy private information from 
user\n");
+   goto exit;
+   }
+
+   switch (object_type) {
+   case KFD_CRIU_OBJECT_TYPE_QUEUE:
+   ret = kfd_criu_restore_queue(p, (uint8_t __user 
*)args->priv_data,
+priv_offset, 
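
The private data blob that criu_get_process_object_info() sizes in this
patch is consumed in the same order on restore: the process header first,
then the BO entries, then the per-object (queue) data walked above. A
sketch of the sizing arithmetic, with illustrative stand-ins for the real
structs in kfd_priv.h:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins; the real layouts live in kfd_priv.h */
struct kfd_criu_process_priv_data { uint32_t version; };
struct kfd_criu_bo_priv_data { uint64_t opaque[8]; };

/* Mirrors criu_get_process_object_info(): the blob is the process header,
 * followed by num_bos BO entries, followed by the queue private data
 * (events and SVM ranges are still TODO slots at this point in the series).
 */
static uint64_t criu_priv_size(uint32_t num_bos, uint64_t queues_priv_data_size)
{
	uint64_t priv_size = sizeof(struct kfd_criu_process_priv_data);

	priv_size += (uint64_t)num_bos * sizeof(struct kfd_criu_bo_priv_data);
	priv_size += queues_priv_data_size;
	return priv_size;
}

int main(void)
{
	printf("priv_data_size = %llu bytes\n",
	       (unsigned long long)criu_priv_size(4, 2048));
	return 0;
}

The checkpoint ioctl then rejects the caller with -EINVAL when the sizes it
was handed do not match this recomputed value, as the hunk above shows.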

[Patch v5 10/24] drm/amdkfd: CRIU restore sdma id for queues

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

When re-creating queues during CRIU restore, restore the queue with the
same sdma id value used during CRIU dump.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 48 ++-
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  3 +-
 .../amd/amdkfd/kfd_process_queue_manager.c|  4 +-
 3 files changed, 40 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 4b6814949aad..15fa2dc6dcba 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -58,7 +58,7 @@ static inline void deallocate_hqd(struct device_queue_manager 
*dqm,
struct queue *q);
 static int allocate_hqd(struct device_queue_manager *dqm, struct queue *q);
 static int allocate_sdma_queue(struct device_queue_manager *dqm,
-   struct queue *q);
+   struct queue *q, const uint32_t 
*restore_sdma_id);
 static void kfd_process_hw_exception(struct work_struct *work);
 
 static inline
@@ -299,7 +299,8 @@ static void deallocate_vmid(struct device_queue_manager 
*dqm,
 
 static int create_queue_nocpsch(struct device_queue_manager *dqm,
struct queue *q,
-   struct qcm_process_device *qpd)
+   struct qcm_process_device *qpd,
+   const struct kfd_criu_queue_priv_data *qd)
 {
struct mqd_manager *mqd_mgr;
int retval;
@@ -339,7 +340,7 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
q->pipe, q->queue);
} else if (q->properties.type == KFD_QUEUE_TYPE_SDMA ||
q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI) {
-   retval = allocate_sdma_queue(dqm, q);
+   retval = allocate_sdma_queue(dqm, q, qd ? &qd->sdma_id : NULL);
if (retval)
goto deallocate_vmid;
dqm->asic_ops.init_sdma_vm(dqm, q, qpd);
@@ -1034,7 +1035,7 @@ static void pre_reset(struct device_queue_manager *dqm)
 }
 
 static int allocate_sdma_queue(struct device_queue_manager *dqm,
-   struct queue *q)
+   struct queue *q, const uint32_t 
*restore_sdma_id)
 {
int bit;
 
@@ -1044,9 +1045,21 @@ static int allocate_sdma_queue(struct 
device_queue_manager *dqm,
return -ENOMEM;
}
 
-   bit = __ffs64(dqm->sdma_bitmap);
-   dqm->sdma_bitmap &= ~(1ULL << bit);
-   q->sdma_id = bit;
+   if (restore_sdma_id) {
+   /* Re-use existing sdma_id */
+   if (!(dqm->sdma_bitmap & (1ULL << *restore_sdma_id))) {
+   pr_err("SDMA queue already in use\n");
+   return -EBUSY;
+   }
+   dqm->sdma_bitmap &= ~(1ULL << *restore_sdma_id);
+   q->sdma_id = *restore_sdma_id;
+   } else {
+   /* Find first available sdma_id */
+   bit = __ffs64(dqm->sdma_bitmap);
+   dqm->sdma_bitmap &= ~(1ULL << bit);
+   q->sdma_id = bit;
+   }
+
q->properties.sdma_engine_id = q->sdma_id %
kfd_get_num_sdma_engines(dqm->dev);
q->properties.sdma_queue_id = q->sdma_id /
@@ -1056,9 +1069,19 @@ static int allocate_sdma_queue(struct 
device_queue_manager *dqm,
pr_err("No more XGMI SDMA queue to allocate\n");
return -ENOMEM;
}
-   bit = __ffs64(dqm->xgmi_sdma_bitmap);
-   dqm->xgmi_sdma_bitmap &= ~(1ULL << bit);
-   q->sdma_id = bit;
+   if (restore_sdma_id) {
+   /* Re-use existing sdma_id */
+   if (!(dqm->xgmi_sdma_bitmap & (1ULL << 
*restore_sdma_id))) {
+   pr_err("SDMA queue already in use\n");
+   return -EBUSY;
+   }
+   dqm->xgmi_sdma_bitmap &= ~(1ULL << *restore_sdma_id);
+   q->sdma_id = *restore_sdma_id;
+   } else {
+   bit = __ffs64(dqm->xgmi_sdma_bitmap);
+   dqm->xgmi_sdma_bitmap &= ~(1ULL << bit);
+   q->sdma_id = bit;
+   }
/* sdma_engine_id is sdma id including
 * both PCIe-optimized SDMAs and XGMI-
 * optimized SDMAs. The calculation below
@@ -1288,7 +1311,8 @@ static void destroy_kernel_queue_cpsch(struct 
device_queue_manager *dqm,
 }
 
 static 
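
Both the PCIe-optimized and XGMI branches above use the same
claim-or-allocate pattern on a 64-bit free mask, where a set bit means the
id is free. A standalone sketch of that pattern, with __builtin_ctzll
standing in for the kernel's __ffs64():

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors allocate_sdma_queue(): either re-claim a specific id on restore,
 * or hand out the lowest free id on a normal queue create.
 */
static int claim_sdma_id(uint64_t *bitmap, const uint32_t *restore_id,
			 uint32_t *out_id)
{
	if (!*bitmap)
		return -ENOMEM;		/* no free ids at all */

	if (restore_id) {
		if (!(*bitmap & (1ULL << *restore_id)))
			return -EBUSY;	/* requested id already in use */
		*bitmap &= ~(1ULL << *restore_id);
		*out_id = *restore_id;
	} else {
		uint32_t bit = __builtin_ctzll(*bitmap);

		*bitmap &= ~(1ULL << bit);
		*out_id = bit;
	}
	return 0;
}

int main(void)
{
	uint64_t bitmap = ~0ULL;	/* all sdma ids free */
	uint32_t id, want = 5;

	claim_sdma_id(&bitmap, &want, &id);	/* restore path: id == 5 */
	claim_sdma_id(&bitmap, NULL, &id);	/* create path:  id == 0 */
	printf("allocated id %u after restoring id 5\n", id);
	return 0;
}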

[Patch v5 07/24] drm/amdkfd: CRIU Implement KFD unpause operation

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

Introduce the UNPAUSE op. After the CRIU amdgpu plugin performs a
PROCESS_INFO op, the queues stay in an evicted state. Once the plugin
is done draining BO contents, it is safe to perform an UNPAUSE op so
the queues can resume.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 37 +++-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h    |  2 ++
 drivers/gpu/drm/amd/amdkfd/kfd_process.c |  1 +
 3 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 95fc5668195c..6af6deeda523 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2049,6 +2049,14 @@ static int criu_checkpoint(struct file *filep,
goto exit_unlock;
}
 
+   /* Confirm all process queues are evicted */
+   if (!p->queues_paused) {
+   pr_err("Cannot dump process when queues are not in evicted 
state\n");
+   /* CRIU plugin did not call op PROCESS_INFO before 
checkpointing */
+   ret = -EINVAL;
+   goto exit_unlock;
+   }
+
	criu_get_process_object_info(p, &num_bos, &priv_size);
 
if (num_bos != args->num_bos ||
@@ -2388,7 +2396,24 @@ static int criu_unpause(struct file *filep,
struct kfd_process *p,
struct kfd_ioctl_criu_args *args)
 {
-   return 0;
+   int ret;
+
+   mutex_lock(&p->mutex);
+
+   if (!p->queues_paused) {
+   mutex_unlock(&p->mutex);
+   return -EINVAL;
+   }
+
+   ret = kfd_process_restore_queues(p);
+   if (ret)
+   pr_err("Failed to unpause queues ret:%d\n", ret);
+   else
+   p->queues_paused = false;
+
+   mutex_unlock(&p->mutex);
+
+   return ret;
 }
 
 static int criu_resume(struct file *filep,
@@ -2440,6 +2465,12 @@ static int criu_process_info(struct file *filep,
goto err_unlock;
}
 
+   ret = kfd_process_evict_queues(p);
+   if (ret)
+   goto err_unlock;
+
+   p->queues_paused = true;
+
args->pid = task_pid_nr_ns(p->lead_thread,
task_active_pid_ns(p->lead_thread));
 
@@ -2447,6 +2478,10 @@ static int criu_process_info(struct file *filep,
 
dev_dbg(kfd_device, "Num of bos:%u\n", args->num_bos);
 err_unlock:
+   if (ret) {
+   kfd_process_restore_queues(p);
+   p->queues_paused = false;
+   }
	mutex_unlock(&p->mutex);
return ret;
 }
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 9b347247055c..677f21447112 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -877,6 +877,8 @@ struct kfd_process {
bool xnack_enabled;
 
atomic_t poison;
+   /* Queues are in a paused state because we are in the process of doing a 
CRIU checkpoint */
+   bool queues_paused;
 };
 
 #define KFD_PROCESS_TABLE_SIZE 5 /* bits: 32 entries */
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index b3198e186622..0649064b8e95 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -1384,6 +1384,7 @@ static struct kfd_process *create_process(const struct 
task_struct *thread)
process->mm = thread->mm;
process->lead_thread = thread->group_leader;
process->n_pdds = 0;
+   process->queues_paused = false;
	INIT_DELAYED_WORK(&process->eviction_work, evict_process_worker);
	INIT_DELAYED_WORK(&process->restore_work, restore_process_worker);
process->last_restore_timestamp = get_jiffies_64();
-- 
2.17.1
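
From the plugin's point of view, the ordering this op completes on the
checkpoint side is: PROCESS_INFO (which, as of this patch, also evicts the
queues), CHECKPOINT plus BO draining, then UNPAUSE. A rough userspace
sketch, assuming the uapi names introduced in the "CRIU Introduce
Checkpoint-Restore APIs" patch and eliding buffer setup and error handling:

#include <sys/ioctl.h>
#include <linux/kfd_ioctl.h>	/* AMDKFD_IOC_CRIU_OP and friends */

/* Rough checkpoint-side sequence for the CRIU amdgpu plugin; the buffer
 * allocation and the CHECKPOINT op itself are elided.
 */
static int checkpoint_and_unpause(int kfd_fd)
{
	struct kfd_ioctl_criu_args args = { .op = KFD_CRIU_OP_PROCESS_INFO };

	/* Stage 1: discovery; this also evicts (pauses) all process queues. */
	if (ioctl(kfd_fd, AMDKFD_IOC_CRIU_OP, &args))
		return -1;

	/* ... size buffers from args.num_bos / args.priv_data_size, issue
	 * KFD_CRIU_OP_CHECKPOINT and drain the BO contents ...
	 */

	/* Once everything is saved, let the still-running target resume. */
	args.op = KFD_CRIU_OP_UNPAUSE;
	return ioctl(kfd_fd, AMDKFD_IOC_CRIU_OP, &args);
}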



[Patch v5 05/24] drm/amdkfd: CRIU Implement KFD restore ioctl

2022-02-03 Thread Rajneesh Bhardwaj
This implements the KFD CRIU restore ioctl that lays the basic
foundation for the CRIU restore operation. It provides support to
create the buffer objects corresponding to the checkpointed image.
This ioctl creates various types of buffer objects such as VRAM,
MMIO, Doorbell, and GTT based on the data sent from the userspace
plugin. The data mostly contains the previously checkpointed KFD
images from some KFD process.

While restoring a CRIU process, attach old IDR values to newly
created BOs. This also adds the minimal GPU mapping support for a
single-GPU checkpoint-restore use case.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 298 ++-
 1 file changed, 297 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 17a937b7139f..342fc56b1940 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2078,11 +2078,307 @@ static int criu_checkpoint(struct file *filep,
return ret;
 }
 
+static int criu_restore_process(struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args,
+   uint64_t *priv_offset,
+   uint64_t max_priv_data_size)
+{
+   int ret = 0;
+   struct kfd_criu_process_priv_data process_priv;
+
+   if (*priv_offset + sizeof(process_priv) > max_priv_data_size)
+   return -EINVAL;
+
+   ret = copy_from_user(&process_priv,
+   (void __user *)(args->priv_data + *priv_offset),
+   sizeof(process_priv));
+   if (ret) {
+   pr_err("Failed to copy process private information from 
user\n");
+   ret = -EFAULT;
+   goto exit;
+   }
+   *priv_offset += sizeof(process_priv);
+
+   if (process_priv.version != KFD_CRIU_PRIV_VERSION) {
+   pr_err("Invalid CRIU API version (checkpointed:%d 
current:%d)\n",
+   process_priv.version, KFD_CRIU_PRIV_VERSION);
+   return -EINVAL;
+   }
+
+exit:
+   return ret;
+}
+
+static int criu_restore_bos(struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args,
+   uint64_t *priv_offset,
+   uint64_t max_priv_data_size)
+{
+   struct kfd_criu_bo_bucket *bo_buckets;
+   struct kfd_criu_bo_priv_data *bo_privs;
+   bool flush_tlbs = false;
+   int ret = 0, j = 0;
+   uint32_t i;
+
+   if (*priv_offset + (args->num_bos * sizeof(*bo_privs)) > 
max_priv_data_size)
+   return -EINVAL;
+
+   bo_buckets = kvmalloc_array(args->num_bos, sizeof(*bo_buckets), 
GFP_KERNEL);
+   if (!bo_buckets)
+   return -ENOMEM;
+
+   ret = copy_from_user(bo_buckets, (void __user *)args->bos,
+args->num_bos * sizeof(*bo_buckets));
+   if (ret) {
+   pr_err("Failed to copy BOs information from user\n");
+   ret = -EFAULT;
+   goto exit;
+   }
+
+   bo_privs = kvmalloc_array(args->num_bos, sizeof(*bo_privs), GFP_KERNEL);
+   if (!bo_privs) {
+   ret = -ENOMEM;
+   goto exit;
+   }
+
+   ret = copy_from_user(bo_privs, (void __user *)args->priv_data + 
*priv_offset,
+args->num_bos * sizeof(*bo_privs));
+   if (ret) {
+   pr_err("Failed to copy BOs information from user\n");
+   ret = -EFAULT;
+   goto exit;
+   }
+   *priv_offset += args->num_bos * sizeof(*bo_privs);
+
+   /* Create and map new BOs */
+   for (i = 0; i < args->num_bos; i++) {
+   struct kfd_criu_bo_bucket *bo_bucket;
+   struct kfd_criu_bo_priv_data *bo_priv;
+   struct kfd_dev *dev;
+   struct kfd_process_device *pdd;
+   void *mem;
+   u64 offset;
+   int idr_handle;
+
+   bo_bucket = &bo_buckets[i];
+   bo_priv = &bo_privs[i];
+
+   dev = kfd_device_by_id(bo_bucket->gpu_id);
+   if (!dev) {
+   ret = -EINVAL;
+   pr_err("Failed to get pdd\n");
+   goto exit;
+   }
+   pdd = kfd_get_process_device_data(dev, p);
+   if (!pdd) {
+   ret = -EINVAL;
+   pr_err("Failed to get pdd\n");
+   goto exit;
+   }
+
+   pr_debug("kfd restore ioctl - bo_bucket[%d]:\n", i);
+   pr_debug("size = 0x%llx, bo_addr = 0x%llx bo_offset = 0x%llx\n"
+   "gpu_id = 0x%x alloc_flags = 0x%x\n"
+   "idr_handle = 0x%x\n",
+   bo_bucket->size,
+   bo_bucket->addr,
+  

[Patch v5 09/24] drm/amdkfd: CRIU restore queue ids

2022-02-03 Thread Rajneesh Bhardwaj
From: David Yat Sin 

When re-creating queues during CRIU restore, restore the queue with the
same queue id value used during CRIU dump.

Signed-off-by: Rajneesh Bhardwaj 
Signed-off-by: David Yat Sin 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  2 +
 .../amd/amdkfd/kfd_process_queue_manager.c| 37 +++
 4 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index d049f9cbbc79..d35911550792 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -311,7 +311,7 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
p->pasid,
dev->id);
 
-   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id,
+   err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, &queue_id, NULL,
&doorbell_offset_in_process);
if (err != 0)
goto err_create_queue;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
index 1e30717b5253..0c50e67e2b51 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
@@ -185,7 +185,7 @@ static int dbgdev_register_diq(struct kfd_dbgdev *dbgdev)
properties.type = KFD_QUEUE_TYPE_DIQ;
 
status = pqm_create_queue(dbgdev->pqm, dbgdev->dev, NULL,
-   &properties, &qid, NULL);
+   &properties, &qid, NULL, NULL);
 
if (status) {
pr_err("Failed to create DIQ\n");
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 41aa7b150a96..59125d8f16a7 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -461,6 +461,7 @@ enum KFD_QUEUE_PRIORITY {
  * it's user mode or kernel mode queue.
  *
  */
+
 struct queue_properties {
enum kfd_queue_type type;
enum kfd_queue_format format;
@@ -1156,6 +1157,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
struct file *f,
struct queue_properties *properties,
unsigned int *qid,
+   const struct kfd_criu_queue_priv_data *q_data,
uint32_t *p_doorbell_offset_in_process);
 int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid);
 int pqm_update_queue_properties(struct process_queue_manager *pqm, unsigned 
int qid,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index 38d3217f0f67..75bad4381421 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -42,6 +42,20 @@ static inline struct process_queue_node *get_queue_by_qid(
return NULL;
 }
 
+static int assign_queue_slot_by_qid(struct process_queue_manager *pqm,
+   unsigned int qid)
+{
+   if (qid >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
+   return -EINVAL;
+
+   if (__test_and_set_bit(qid, pqm->queue_slot_bitmap)) {
+   pr_err("Cannot create new queue because requested qid(%u) is in 
use\n", qid);
+   return -ENOSPC;
+   }
+
+   return 0;
+}
+
 static int find_available_queue_slot(struct process_queue_manager *pqm,
unsigned int *qid)
 {
@@ -193,6 +207,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
struct file *f,
struct queue_properties *properties,
unsigned int *qid,
+   const struct kfd_criu_queue_priv_data *q_data,
uint32_t *p_doorbell_offset_in_process)
 {
int retval;
@@ -224,7 +239,12 @@ int pqm_create_queue(struct process_queue_manager *pqm,
if (pdd->qpd.queue_count >= max_queues)
return -ENOSPC;
 
-   retval = find_available_queue_slot(pqm, qid);
+   if (q_data) {
+   retval = assign_queue_slot_by_qid(pqm, q_data->q_id);
+   *qid = q_data->q_id;
+   } else
+   retval = find_available_queue_slot(pqm, qid);
+
if (retval != 0)
return retval;
 
@@ -527,7 +547,7 @@ int kfd_process_get_queue_info(struct kfd_process *p,
return 0;
 }
 
-static void criu_dump_queue(struct kfd_process_device *pdd,
+static void criu_checkpoint_queue(struct kfd_process_device *pdd,
   struct queue *q,
   struct kfd_criu_queue_priv_data *q_data)
 {
@@ -559,7 +579,7 @@ static void criu_dump_queue(struct kfd_process_device *pdd,
pr_debug("Dumping Queue: gpu_id:%x 

[Patch v5 02/24] drm/amdkfd: CRIU Introduce Checkpoint-Restore APIs

2022-02-03 Thread Rajneesh Bhardwaj
Checkpoint-Restore In Userspace (CRIU) is a powerful tool that can
snapshot a running process and later restore it on the same or a remote
machine, but it expects processes that have a device file (e.g. a GPU)
associated with them to provide the necessary driver support to assist
CRIU and its extensible plugin interface. Thus, in order to support the
checkpoint-restore of any ROCm process, the AMD Radeon Open Compute
kernel driver needs to provide a set of new APIs that expose the
necessary VRAM metadata and its contents to a userspace component (the
CRIU plugin), which can store it in the form of image files.

This introduces some new ioctls which will be used to checkpoint-restore
any KFD-bound user process. KFD only allows ioctl calls from the same
process that opened the KFD file descriptor. Since these ioctls are
expected to be called from a KFD CRIU plugin, which is ptrace-attached
to the target process and holds the CAP_CHECKPOINT_RESTORE capability,
modify KFD to allow such calls.

(API redesigned by David Yat Sin)
Suggested-by: Felix Kuehling 
Reviewed-by: Felix Kuehling 
Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 98 +++-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 65 +++-
 include/uapi/linux/kfd_ioctl.h   | 81 +++-
 3 files changed, 241 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 214a2c67fba4..90e6d9e335a5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include <linux/ptrace.h>
 #include 
 #include 
 #include "kfd_priv.h"
@@ -1859,6 +1860,75 @@ static int kfd_ioctl_svm(struct file *filep, struct 
kfd_process *p, void *data)
 }
 #endif
 
+static int criu_checkpoint(struct file *filep,
+  struct kfd_process *p,
+  struct kfd_ioctl_criu_args *args)
+{
+   return 0;
+}
+
+static int criu_restore(struct file *filep,
+   struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args)
+{
+   return 0;
+}
+
+static int criu_unpause(struct file *filep,
+   struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args)
+{
+   return 0;
+}
+
+static int criu_resume(struct file *filep,
+   struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args)
+{
+   return 0;
+}
+
+static int criu_process_info(struct file *filep,
+   struct kfd_process *p,
+   struct kfd_ioctl_criu_args *args)
+{
+   return 0;
+}
+
+static int kfd_ioctl_criu(struct file *filep, struct kfd_process *p, void 
*data)
+{
+   struct kfd_ioctl_criu_args *args = data;
+   int ret;
+
+   dev_dbg(kfd_device, "CRIU operation: %d\n", args->op);
+   switch (args->op) {
+   case KFD_CRIU_OP_PROCESS_INFO:
+   ret = criu_process_info(filep, p, args);
+   break;
+   case KFD_CRIU_OP_CHECKPOINT:
+   ret = criu_checkpoint(filep, p, args);
+   break;
+   case KFD_CRIU_OP_UNPAUSE:
+   ret = criu_unpause(filep, p, args);
+   break;
+   case KFD_CRIU_OP_RESTORE:
+   ret = criu_restore(filep, p, args);
+   break;
+   case KFD_CRIU_OP_RESUME:
+   ret = criu_resume(filep, p, args);
+   break;
+   default:
+   dev_dbg(kfd_device, "Unsupported CRIU operation:%d\n", 
args->op);
+   ret = -EINVAL;
+   break;
+   }
+
+   if (ret)
+   dev_dbg(kfd_device, "CRIU operation:%d err:%d\n", args->op, 
ret);
+
+   return ret;
+}
+
 #define AMDKFD_IOCTL_DEF(ioctl, _func, _flags) \
[_IOC_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, \
.cmd_drv = 0, .name = #ioctl}
@@ -1962,6 +2032,10 @@ static const struct amdkfd_ioctl_desc amdkfd_ioctls[] = {
 
AMDKFD_IOCTL_DEF(AMDKFD_IOC_SET_XNACK_MODE,
kfd_ioctl_set_xnack_mode, 0),
+
+   AMDKFD_IOCTL_DEF(AMDKFD_IOC_CRIU_OP,
+   kfd_ioctl_criu, KFD_IOC_FLAG_CHECKPOINT_RESTORE),
+
 };
 
 #define AMDKFD_CORE_IOCTL_COUNTARRAY_SIZE(amdkfd_ioctls)
@@ -1976,6 +2050,7 @@ static long kfd_ioctl(struct file *filep, unsigned int 
cmd, unsigned long arg)
char *kdata = NULL;
unsigned int usize, asize;
int retcode = -EINVAL;
+   bool ptrace_attached = false;
 
if (nr >= AMDKFD_CORE_IOCTL_COUNT)
goto err_i1;
@@ -2001,7 +2076,15 @@ static long kfd_ioctl(struct file *filep, unsigned int 
cmd, unsigned long arg)
 * processes need to create their own KFD device context.
 */
process = filep->private_data;
-   if 
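
The hunk is truncated here, but per the commit message the ownership check
in kfd_ioctl() is relaxed for these ops. The gist, as a sketch rather than
the verbatim hunk: a caller that is ptrace-attached to the KFD process (and
running with CAP_CHECKPOINT_RESTORE) may issue the ioctls flagged
KFD_IOC_FLAG_CHECKPOINT_RESTORE on its behalf; everyone else must still be
the process that opened the fd.

	rcu_read_lock();
	if ((ioctl->flags & KFD_IOC_FLAG_CHECKPOINT_RESTORE) &&
	    ptrace_parent(process->lead_thread) == current)
		ptrace_attached = true;
	rcu_read_unlock();

	/* reject callers that neither own the fd nor are an attached plugin */
	if (process->lead_thread != current->group_leader && !ptrace_attached) {
		retcode = -EBADF;
		goto err_i1;
	}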

[Patch v5 04/24] drm/amdkfd: CRIU Implement KFD checkpoint ioctl

2022-02-03 Thread Rajneesh Bhardwaj
This adds support to discover the buffer objects that belong to a
process being checkpointed. The data corresponding to these buffer
objects is returned to the userspace plugin running under the CRIU
master context, which then stores this info to recreate these buffer
objects during a restore operation.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|   1 +
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |  11 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  20 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |   2 +
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 177 +-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |   4 +-
 6 files changed, 213 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index ac841ae8f5cc..395ba9566afe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -297,6 +297,7 @@ int amdgpu_amdkfd_get_tile_config(struct amdgpu_device 
*adev,
struct tile_config *config);
 void amdgpu_amdkfd_ras_poison_consumption_handler(struct amdgpu_device *adev,
bool reset);
+bool amdgpu_amdkfd_bo_mapped_to_dev(struct amdgpu_device *adev, struct kgd_mem 
*mem);
 #if IS_ENABLED(CONFIG_HSA_AMD)
 void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
 void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 5df387c4d7fb..3485ef856860 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -2629,3 +2629,14 @@ int amdgpu_amdkfd_get_tile_config(struct amdgpu_device 
*adev,
 
return 0;
 }
+
+bool amdgpu_amdkfd_bo_mapped_to_dev(struct amdgpu_device *adev, struct kgd_mem 
*mem)
+{
+   struct kfd_mem_attachment *entry;
+
	list_for_each_entry(entry, &mem->attachments, list) {
+   if (entry->is_mapped && entry->adev == adev)
+   return true;
+   }
+   return false;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b9637d1cf147..5a32ee66d8c8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1127,6 +1127,26 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_device 
*bdev,
	return ttm_pool_free(&adev->mman.bdev.pool, ttm);
 }
 
+/**
+ * amdgpu_ttm_tt_get_userptr - Return the userptr address backing a GTT
+ * ttm_tt
+ *
+ * @tbo: The ttm_buffer_object that contains the userptr
+ * @user_addr: The returned userptr address
+ */
+int amdgpu_ttm_tt_get_userptr(const struct ttm_buffer_object *tbo,
+ uint64_t *user_addr)
+{
+   struct amdgpu_ttm_tt *gtt;
+
+   if (!tbo->ttm)
+   return -EINVAL;
+
+   gtt = (void *)tbo->ttm;
+   *user_addr = gtt->userptr;
+   return 0;
+}
+
 /**
  * amdgpu_ttm_tt_set_userptr - Initialize userptr GTT ttm_tt for the current
  * task
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index d9691f262f16..39d966e7185d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -181,6 +181,8 @@ static inline bool amdgpu_ttm_tt_get_user_pages_done(struct 
ttm_tt *ttm)
 #endif
 
 void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct page **pages);
+int amdgpu_ttm_tt_get_userptr(const struct ttm_buffer_object *tbo,
+ uint64_t *user_addr);
 int amdgpu_ttm_tt_set_userptr(struct ttm_buffer_object *bo,
  uint64_t addr, uint32_t flags);
 bool amdgpu_ttm_tt_has_userptr(struct ttm_tt *ttm);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 29443419bbf0..17a937b7139f 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1860,6 +1860,29 @@ static int kfd_ioctl_svm(struct file *filep, struct 
kfd_process *p, void *data)
 }
 #endif
 
+static int criu_checkpoint_process(struct kfd_process *p,
+uint8_t __user *user_priv_data,
+uint64_t *priv_offset)
+{
+   struct kfd_criu_process_priv_data process_priv;
+   int ret;
+
+   memset(_priv, 0, sizeof(process_priv));
+
+   process_priv.version = KFD_CRIU_PRIV_VERSION;
+
+   ret = copy_to_user(user_priv_data + *priv_offset,
+   &process_priv, sizeof(process_priv));
+
+   if (ret) {
+   pr_err("Failed to copy process information to user\n");
+   ret = -EFAULT;
+   }
+
+   *priv_offset += sizeof(process_priv);
+   return ret;
+}
+
 uint32_t get_process_num_bos(struct kfd_process *p)
 {
uint32_t num_of_bos = 

[Patch v5 06/24] drm/amdkfd: CRIU Implement KFD resume ioctl

2022-02-03 Thread Rajneesh Bhardwaj
This adds support to create userptr BOs on restore and introduces a new
ioctl op to restart memory notifiers for the restored userptr BOs.
During a CRIU restore, MMU notifications can happen anytime after we call
amdgpu_mn_register. Prevent MMU notifications until we reach stage-4 of
the restore process, i.e. until the criu_resume ioctl op is received and
the process is ready to be resumed. This ioctl is different from the
other KFD CRIU ioctls since it is called by the CRIU master restore
process for all the target processes being resumed by CRIU.

Signed-off-by: David Yat Sin 
Signed-off-by: Rajneesh Bhardwaj 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|  6 ++-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 53 +--
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 41 --
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  1 +
 drivers/gpu/drm/amd/amdkfd/kfd_process.c  | 35 ++--
 5 files changed, 122 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index 395ba9566afe..4cb14c2fe53f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -131,6 +131,7 @@ struct amdkfd_process_info {
atomic_t evicted_bos;
struct delayed_work restore_userptr_work;
struct pid *pid;
+   bool block_mmu_notifications;
 };
 
 int amdgpu_amdkfd_init(void);
@@ -268,7 +269,7 @@ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void 
*drm_priv);
 int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
struct amdgpu_device *adev, uint64_t va, uint64_t size,
void *drm_priv, struct kgd_mem **mem,
-   uint64_t *offset, uint32_t flags);
+   uint64_t *offset, uint32_t flags, bool criu_resume);
 int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
struct amdgpu_device *adev, struct kgd_mem *mem, void *drm_priv,
uint64_t *size);
@@ -298,6 +299,9 @@ int amdgpu_amdkfd_get_tile_config(struct amdgpu_device 
*adev,
 void amdgpu_amdkfd_ras_poison_consumption_handler(struct amdgpu_device *adev,
bool reset);
 bool amdgpu_amdkfd_bo_mapped_to_dev(struct amdgpu_device *adev, struct kgd_mem 
*mem);
+void amdgpu_amdkfd_block_mmu_notifications(void *p);
+int amdgpu_amdkfd_criu_resume(void *p);
+
 #if IS_ENABLED(CONFIG_HSA_AMD)
 void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
 void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 3485ef856860..69dc9e4d841c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -842,7 +842,8 @@ static void remove_kgd_mem_from_kfd_bo_list(struct kgd_mem 
*mem,
  *
  * Returns 0 for success, negative errno for errors.
  */
-static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr)
+static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr,
+  bool criu_resume)
 {
struct amdkfd_process_info *process_info = mem->process_info;
struct amdgpu_bo *bo = mem->bo;
@@ -864,6 +865,18 @@ static int init_user_pages(struct kgd_mem *mem, uint64_t 
user_addr)
goto out;
}
 
+   if (criu_resume) {
+   /*
+* During a CRIU restore operation, the userptr buffer objects
+* will be validated in the restore_userptr_work worker at a
+* later stage when it is scheduled by another ioctl called by
+* CRIU master process for the target pid for restore.
+*/
+   atomic_inc(&mem->invalid);
+   mutex_unlock(&process_info->lock);
+   return 0;
+   }
+
ret = amdgpu_ttm_tt_get_user_pages(bo, bo->tbo.ttm->pages);
if (ret) {
pr_err("%s: Failed to get user pages: %d\n", __func__, ret);
@@ -1452,10 +1465,39 @@ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void 
*drm_priv)
return avm->pd_phys_addr;
 }
 
+void amdgpu_amdkfd_block_mmu_notifications(void *p)
+{
+   struct amdkfd_process_info *pinfo = (struct amdkfd_process_info *)p;
+
+   mutex_lock(&pinfo->lock);
+   WRITE_ONCE(pinfo->block_mmu_notifications, true);
+   mutex_unlock(&pinfo->lock);
+}
+
+int amdgpu_amdkfd_criu_resume(void *p)
+{
+   int ret = 0;
+   struct amdkfd_process_info *pinfo = (struct amdkfd_process_info *)p;
+
+   mutex_lock(&pinfo->lock);
+   pr_debug("scheduling work\n");
+   atomic_inc(&pinfo->evicted_bos);
+   if (!READ_ONCE(pinfo->block_mmu_notifications)) {
+   ret = -EINVAL;
+   goto out_unlock;
+   }
+   WRITE_ONCE(pinfo->block_mmu_notifications, false);
+   schedule_delayed_work(&pinfo->restore_userptr_work, 0);
+
+out_unlock:
+   mutex_unlock(&pinfo->lock);
+   return ret;
+}
+
 int 
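
On the restore side, this op identifies its target by pid instead of by the
caller itself. A minimal sketch of the master-side call, assuming the uapi
names introduced earlier in this series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kfd_ioctl.h>	/* AMDKFD_IOC_CRIU_OP and friends */

/* Sketch: during stage-4 of restore, the CRIU master re-arms MMU
 * notifications for each restored target by issuing a RESUME op carrying
 * that target's pid.
 */
static int kfd_resume_target(int kfd_fd, uint32_t target_pid)
{
	struct kfd_ioctl_criu_args args = {
		.op  = KFD_CRIU_OP_RESUME,
		.pid = target_pid,
	};

	return ioctl(kfd_fd, AMDKFD_IOC_CRIU_OP, &args);
}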

[Patch v5 03/24] drm/amdkfd: CRIU Implement KFD process_info ioctl

2022-02-03 Thread Rajneesh Bhardwaj
This IOCTL op is expected to be called as a precursor to the actual
checkpoint operation. It performs the basic discovery of the target
process seized by CRIU and relays the information to userspace, which
uses it to start the checkpoint operation via another dedicated IOCTL
op.

The process_info IOCTL op determines the number of GPUs and buffer
objects that are associated with the target process, as well as its
process id in the caller's namespace, since the /proc/pid/mem interface
may be used to drain the contents of the discovered buffer objects in
userspace and getpid there returns the pid of the CRIU dumper process.
Also, the pid of a process inside a container might be different from
its global pid, so return the namespace pid.

Signed-off-by: Rajneesh Bhardwaj 
Signed-off-by: David Yat Sin 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 56 +++-
 1 file changed, 55 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 90e6d9e335a5..29443419bbf0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1860,6 +1860,42 @@ static int kfd_ioctl_svm(struct file *filep, struct 
kfd_process *p, void *data)
 }
 #endif
 
+uint32_t get_process_num_bos(struct kfd_process *p)
+{
+   uint32_t num_of_bos = 0;
+   int i;
+
+   /* Run over all PDDs of the process */
+   for (i = 0; i < p->n_pdds; i++) {
+   struct kfd_process_device *pdd = p->pdds[i];
+   void *mem;
+   int id;
+
		idr_for_each_entry(&pdd->alloc_idr, mem, id) {
+   struct kgd_mem *kgd_mem = (struct kgd_mem *)mem;
+
+   if ((uint64_t)kgd_mem->va > pdd->gpuvm_base)
+   num_of_bos++;
+   }
+   }
+   return num_of_bos;
+}
+
+static void criu_get_process_object_info(struct kfd_process *p,
+uint32_t *num_bos,
+uint64_t *objs_priv_size)
+{
+   uint64_t priv_size;
+
+   *num_bos = get_process_num_bos(p);
+
+   if (objs_priv_size) {
+   priv_size = sizeof(struct kfd_criu_process_priv_data);
+   priv_size += *num_bos * sizeof(struct kfd_criu_bo_priv_data);
+   *objs_priv_size = priv_size;
+   }
+}
+
 static int criu_checkpoint(struct file *filep,
   struct kfd_process *p,
   struct kfd_ioctl_criu_args *args)
@@ -1892,7 +1928,25 @@ static int criu_process_info(struct file *filep,
struct kfd_process *p,
struct kfd_ioctl_criu_args *args)
 {
-   return 0;
+   int ret = 0;
+
+   mutex_lock(&p->mutex);
+
+   if (!p->n_pdds) {
+   pr_err("No pdd for given process\n");
+   ret = -ENODEV;
+   goto err_unlock;
+   }
+
+   args->pid = task_pid_nr_ns(p->lead_thread,
+   task_active_pid_ns(p->lead_thread));
+
+   criu_get_process_object_info(p, &args->num_bos, &args->priv_data_size);
+
+   dev_dbg(kfd_device, "Num of bos:%u\n", args->num_bos);
+err_unlock:
+   mutex_unlock(&p->mutex);
+   return ret;
 }
 
 static int kfd_ioctl_criu(struct file *filep, struct kfd_process *p, void 
*data)
-- 
2.17.1
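
As the message notes, the discovered buffer objects can then be drained
from userspace through /proc/pid/mem using the pid returned by this op. A
minimal sketch of that draining step, where addr and size would come from
the BO buckets that the CHECKPOINT op fills in:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch: read one BO's contents out of the target process through
 * /proc/<pid>/mem, using the CPU mapping address and size reported for
 * the bucket.
 */
static ssize_t drain_bo(pid_t target_pid, uint64_t addr, uint64_t size,
			void *out)
{
	char path[64];
	ssize_t n;
	int fd;

	snprintf(path, sizeof(path), "/proc/%d/mem", (int)target_pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	n = pread(fd, out, (size_t)size, (off_t)addr);
	close(fd);
	return n;
}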



[Patch v5 01/24] x86/configs: CRIU update debug rock defconfig

2022-02-03 Thread Rajneesh Bhardwaj
 - Update debug config for Checkpoint-Restore (CR) support
 - Also include necessary options for CR with docker containers.

Reviewed-by: Felix Kuehling 
Signed-off-by: Rajneesh Bhardwaj 
---
 arch/x86/configs/rock-dbg_defconfig | 53 ++---
 1 file changed, 34 insertions(+), 19 deletions(-)

diff --git a/arch/x86/configs/rock-dbg_defconfig 
b/arch/x86/configs/rock-dbg_defconfig
index 4877da183599..bc2a34666c1d 100644
--- a/arch/x86/configs/rock-dbg_defconfig
+++ b/arch/x86/configs/rock-dbg_defconfig
@@ -249,6 +249,7 @@ CONFIG_KALLSYMS_ALL=y
 CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
 CONFIG_KALLSYMS_BASE_RELATIVE=y
 # CONFIG_USERFAULTFD is not set
+CONFIG_USERFAULTFD=y
 CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
 CONFIG_KCMP=y
 CONFIG_RSEQ=y
@@ -1015,6 +1016,11 @@ CONFIG_PACKET_DIAG=y
 CONFIG_UNIX=y
 CONFIG_UNIX_SCM=y
 CONFIG_UNIX_DIAG=y
+CONFIG_SMC_DIAG=y
+CONFIG_XDP_SOCKETS_DIAG=y
+CONFIG_INET_MPTCP_DIAG=y
+CONFIG_TIPC_DIAG=y
+CONFIG_VSOCKETS_DIAG=y
 # CONFIG_TLS is not set
 CONFIG_XFRM=y
 CONFIG_XFRM_ALGO=y
@@ -1052,15 +1058,17 @@ CONFIG_SYN_COOKIES=y
 # CONFIG_NET_IPVTI is not set
 # CONFIG_NET_FOU is not set
 # CONFIG_NET_FOU_IP_TUNNELS is not set
-# CONFIG_INET_AH is not set
-# CONFIG_INET_ESP is not set
-# CONFIG_INET_IPCOMP is not set
-CONFIG_INET_TUNNEL=y
-CONFIG_INET_DIAG=y
-CONFIG_INET_TCP_DIAG=y
-# CONFIG_INET_UDP_DIAG is not set
-# CONFIG_INET_RAW_DIAG is not set
-# CONFIG_INET_DIAG_DESTROY is not set
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+CONFIG_INET_IPCOMP=m
+CONFIG_INET_ESP_OFFLOAD=m
+CONFIG_INET_TUNNEL=m
+CONFIG_INET_XFRM_TUNNEL=m
+CONFIG_INET_DIAG=m
+CONFIG_INET_TCP_DIAG=m
+CONFIG_INET_UDP_DIAG=m
+CONFIG_INET_RAW_DIAG=m
+CONFIG_INET_DIAG_DESTROY=y
 CONFIG_TCP_CONG_ADVANCED=y
 # CONFIG_TCP_CONG_BIC is not set
 CONFIG_TCP_CONG_CUBIC=y
@@ -1085,12 +1093,14 @@ CONFIG_TCP_MD5SIG=y
 CONFIG_IPV6=y
 # CONFIG_IPV6_ROUTER_PREF is not set
 # CONFIG_IPV6_OPTIMISTIC_DAD is not set
-CONFIG_INET6_AH=y
-CONFIG_INET6_ESP=y
-# CONFIG_INET6_ESP_OFFLOAD is not set
-# CONFIG_INET6_ESPINTCP is not set
-# CONFIG_INET6_IPCOMP is not set
-# CONFIG_IPV6_MIP6 is not set
+CONFIG_INET6_AH=m
+CONFIG_INET6_ESP=m
+CONFIG_INET6_ESP_OFFLOAD=m
+CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_MIP6=m
+CONFIG_INET6_XFRM_TUNNEL=m
+CONFIG_INET_DCCP_DIAG=m
+CONFIG_INET_SCTP_DIAG=m
 # CONFIG_IPV6_ILA is not set
 # CONFIG_IPV6_VTI is not set
 CONFIG_IPV6_SIT=y
@@ -1146,8 +1156,13 @@ CONFIG_NF_CT_PROTO_UDPLITE=y
 # CONFIG_NF_CONNTRACK_SANE is not set
 # CONFIG_NF_CONNTRACK_SIP is not set
 # CONFIG_NF_CONNTRACK_TFTP is not set
-# CONFIG_NF_CT_NETLINK is not set
-# CONFIG_NF_CT_NETLINK_TIMEOUT is not set
+CONFIG_COMPAT_NETLINK_MESSAGES=y
+CONFIG_NF_CT_NETLINK=m
+CONFIG_NF_CT_NETLINK_TIMEOUT=m
+CONFIG_NF_CT_NETLINK_HELPER=m
+CONFIG_NETFILTER_NETLINK_GLUE_CT=y
+CONFIG_SCSI_NETLINK=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_NF_NAT=m
 CONFIG_NF_NAT_REDIRECT=y
 CONFIG_NF_NAT_MASQUERADE=y
@@ -1992,7 +2007,7 @@ CONFIG_NETCONSOLE_DYNAMIC=y
 CONFIG_NETPOLL=y
 CONFIG_NET_POLL_CONTROLLER=y
 # CONFIG_RIONET is not set
-# CONFIG_TUN is not set
+CONFIG_TUN=y
 # CONFIG_TUN_VNET_CROSS_LE is not set
 CONFIG_VETH=y
 # CONFIG_NLMON is not set
@@ -3990,7 +4005,7 @@ CONFIG_MANDATORY_FILE_LOCKING=y
 CONFIG_FSNOTIFY=y
 CONFIG_DNOTIFY=y
 CONFIG_INOTIFY_USER=y
-# CONFIG_FANOTIFY is not set
+CONFIG_FANOTIFY=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
-- 
2.17.1



[Patch v5 00/24] CHECKPOINT RESTORE WITH ROCm

2022-02-03 Thread Rajneesh Bhardwaj
V5: Proposed IOCTL APIs for CRIU with consolidated feedback

CRIU is a user space tool which is very popular for container live
migration in datacentres. It can checkpoint a running application, save
its complete state, memory contents and all system resources to images
on disk which can be migrated to another machine and restored later.
More information on CRIU can be found at https://criu.org/Main_Page

CRIU currently does not support Checkpoint / Restore with applications
that have device files open, so it cannot perform checkpoint and restore
on GPU devices, which are very complex and have their own VRAM managed
privately. CRIU, however, can support external devices by using a plugin
architecture. We feel that we are getting close to finalizing our IOCTL
APIs, which were again changed since V3 for an improved modular design.

Our changes to CRIU userspace can be obtained from here:
https://github.com/RadeonOpenCompute/criu/tree/amdgpu_rfc-211222

We have tested the following scenarios:
 - Checkpoint / Restore of a Pytorch (BERT) workload
 - kfdtests with queues and events
 - Gfx9 and Gfx10 based multi GPU test systems 
 - On baremetal and inside a docker container
 - Restoring on a different system

V1: Initial
V2: Addressed review comments
V3: Rebased on latest amd-staging-drm-next (5.15 based)
v4: New API design and basic support for SVM; however, there is an
outstanding issue with SVM restore which is currently under debug, and
hopefully it won't impact the ioctl APIs since SVMs are treated as
private data hidden from user space, like queues and events, with the
new approach.
V5: Fix the SVM related issues and finalize the APIs. 

David Yat Sin (9):
  drm/amdkfd: CRIU Implement KFD unpause operation
  drm/amdkfd: CRIU add queues support
  drm/amdkfd: CRIU restore queue ids
  drm/amdkfd: CRIU restore sdma id for queues
  drm/amdkfd: CRIU restore queue doorbell id
  drm/amdkfd: CRIU checkpoint and restore queue mqds
  drm/amdkfd: CRIU checkpoint and restore queue control stack
  drm/amdkfd: CRIU checkpoint and restore events
  drm/amdkfd: CRIU implement gpu_id remapping

Rajneesh Bhardwaj (15):
  x86/configs: CRIU update debug rock defconfig
  drm/amdkfd: CRIU Introduce Checkpoint-Restore APIs
  drm/amdkfd: CRIU Implement KFD process_info ioctl
  drm/amdkfd: CRIU Implement KFD checkpoint ioctl
  drm/amdkfd: CRIU Implement KFD restore ioctl
  drm/amdkfd: CRIU Implement KFD resume ioctl
  drm/amdkfd: CRIU export BOs as prime dmabuf objects
  drm/amdkfd: CRIU checkpoint and restore xnack mode
  drm/amdkfd: CRIU allow external mm for svm ranges
  drm/amdkfd: use user_gpu_id for svm ranges
  drm/amdkfd: CRIU Discover svm ranges
  drm/amdkfd: CRIU Save Shared Virtual Memory ranges
  drm/amdkfd: CRIU prepare for svm resume
  drm/amdkfd: CRIU resume shared virtual memory ranges
  drm/amdkfd: Bump up KFD API version for CRIU

 arch/x86/configs/rock-dbg_defconfig   |   53 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|7 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   64 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |   20 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |2 +
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 1471 ++---
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |2 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c |  185 ++-
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |   16 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c   |  313 +++-
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h  |   14 +
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c  |   75 +
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  |   77 +
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c   |   92 ++
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c   |   84 +
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h |  160 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c  |   72 +-
 .../amd/amdkfd/kfd_process_queue_manager.c|  372 -
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  331 +++-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.h  |   39 +
 include/uapi/linux/kfd_ioctl.h|   84 +-
 21 files changed, 3193 insertions(+), 340 deletions(-)

-- 
2.17.1