Not a set of patches.
I suggest you choose the branch in your repo that you want to merge.
Rebase it on top of master of the official libhsakmt repo.
Create a pull request using the git request-pull command and send it to me.
In the pull request, describe the changes, new features, etc.
Then I
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
index c32d0b0..d31259e 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu
1) no SR-IOV check, since GPU recover is unified
2) the CPU_ACCESS_REQUIRED flag is needed for VRAM under SR-IOV,
because otherwise, after the following pin, the first allocated
VRAM BO is wasted due to a TTM manager issue.
Change-Id: I4d029f2da8bb463942c7861d3e52f309bdba9576
Signed-off-by: Monk Liu
---
drivers/gpu/drm
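The flag change described above can be sketched as follows. This is a minimal illustration, not the actual patch: the struct, helper name, and flag constant are hypothetical stand-ins for the real `AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED` handling in the driver.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag value for illustration; the real driver uses
 * AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED from the kernel uapi headers. */
#define GEM_CREATE_CPU_ACCESS_REQUIRED (1u << 0)

/* Sketch of the idea: under SR-IOV, force CPU_ACCESS_REQUIRED onto the
 * VRAM BO creation flags so the first allocated VRAM BO is not wasted
 * by the TTM manager after a later pin. */
static uint32_t vram_bo_flags(uint32_t flags, bool is_sriov)
{
	if (is_sriov)
		flags |= GEM_CREATE_CPU_ACCESS_REQUIRED;
	return flags;
}
```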
*** the job skipping logic in the scheduler is re-implemented ***
Monk Liu (7):
amd/scheduler: implement job skip feature (v3)
drm/amdgpu: implement new GPU recover (v3)
drm/amdgpu: cleanup in_sriov_reset and lock_reset
drm/amdgpu: cleanup ucode_init_bo
drm/amdgpu: block kms open during gpu_reset
d
For SR-IOV, when doing a GPU reset this routine shouldn't do
resource allocation; otherwise memory is leaked.
Change-Id: I25da3a5b475196c75c7e639adc40751754625968
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
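The leak described above can be sketched as a reuse-on-reset guard. This is a simplified illustration, not the driver code: the context struct, field names, and helper are hypothetical, and `malloc` stands in for the real BO allocation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical context struct for illustration. */
struct psp_ctx {
	bool in_gpu_reset;  /* set while gpu_recover is running */
	void *fw_buf;       /* firmware backing buffer, allocated once */
};

/* Sketch: during a GPU reset the init routine must reuse the buffer
 * allocated at first init instead of allocating again; otherwise the
 * old allocation leaks on every reset. */
static int psp_init_buf(struct psp_ctx *ctx, size_t size)
{
	if (ctx->in_gpu_reset && ctx->fw_buf)
		return 0;  /* reuse the existing allocation */
	ctx->fw_buf = malloc(size);
	return ctx->fw_buf ? 0 : -1;
}
```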
Since GPU reset is now unified with gpu_recover
for both bare-metal and SR-IOV:
1) rename in_sriov_reset to in_gpu_reset
2) move lock_reset from adev->virt to adev
Change-Id: I9f4dbab9a4c916fbc156f669824d15ddcd0f2322
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++-
Jobs are skipped in two cases:
1) when the entity behind the job is marked guilty, the job
popped from that entity's queue is dropped in the sched_main loop.
2) in job_recovery(), the job being scheduled is skipped if its karma
exceeds the limit, and the same applies to other jobs sharing the
same fence.
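The two skip conditions can be sketched as simple predicates. This is an illustrative simplification of the scheduler logic: the struct names and the karma limit are hypothetical, not the actual amd/scheduler types.

```c
#include <stdbool.h>

/* Hypothetical, simplified scheduler types for illustration only. */
struct sched_entity { bool guilty; };
struct sched_job {
	struct sched_entity *entity;
	int karma;
};

#define KARMA_LIMIT 2  /* hypothetical hang limit */

/* Case 1: a job popped from a guilty entity's queue is dropped
 * in the sched_main loop. */
static bool drop_in_sched_main(const struct sched_job *job)
{
	return job->entity->guilty;
}

/* Case 2: in job recovery, skip the job whose karma exceeded the
 * limit (callers also skip other jobs sharing the same fence). */
static bool skip_in_job_recovery(const struct sched_job *job)
{
	return job->karma > KARMA_LIMIT;
}
```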
1) the new implementation is named amdgpu_gpu_recover, which gives a better hint
about what it does compared with gpu_reset.
2) gpu_recover unifies bare-metal and SR-IOV; only the ASIC reset
part is implemented differently.
3) gpu_recover increases the hanging job's karma and marks its entity/context
as guilty if the karma exceeds the limit.
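The unified flow described in points 2) and 3) can be sketched like this. All names here are hypothetical stand-ins for illustration; only the shape of the logic (shared recover path, path-specific ASIC reset, karma bump and guilty marking) reflects the description above.

```c
#include <stdbool.h>

/* Hypothetical, simplified types for illustration. */
struct job_ctx { bool guilty; };
struct hang_job {
	struct job_ctx *ctx;
	int karma;
};

#define RECOVER_KARMA_LIMIT 2  /* hypothetical */

static bool asic_reset_done;

/* Only the ASIC reset step differs between bare-metal and SR-IOV. */
static void asic_reset_baremetal(void) { asic_reset_done = true; }
static void asic_reset_sriov(void)     { asic_reset_done = true; }

/* Sketch of the unified recover flow: bump the hanging job's karma,
 * mark its context guilty past the limit, then run the path-specific
 * ASIC reset. */
static void gpu_recover(struct hang_job *job, bool is_sriov)
{
	if (job) {
		job->karma++;
		if (job->karma > RECOVER_KARMA_LIMIT)
			job->ctx->guilty = true;
	}
	if (is_sriov)
		asic_reset_sriov();
	else
		asic_reset_baremetal();
}
```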
V2:
> I can't see any difference between the handling of existing VMs and new
> created ones.
I know; for existing VMs we still have similar problems. I'm not saying this
patch solves the existing-VM problem ...
My earliest patch series actually used an approach that can 100% avoid such
problems: taking an RW mlock on dr
On 2017-10-27 22:43, Christian König wrote:
From: Christian König
Just allocate the GART space and fill it.
This prevents forcing the BO to be idle.
v2: don't unbind/bind at all, just fill the allocated GART space
Could you explain what 'unbind/bind' means? My old understanding is that 'bind'
is
On 2017-10-27 22:43, Christian König wrote:
From: Christian König
The GTT manager handles the GART address space anyway, so it is
completely pointless to keep the same information around twice.
Signed-off-by: Christian König
Good cleanup, Reviewed-by: Chunming Zhou
---
drivers/gpu/dr
On 2017-10-27 22:43, Christian König wrote:
From: Christian König
Rename amdgpu_gtt_mgr_is_allocated() to amdgpu_gtt_mgr_has_gart_addr() and use
that instead.
v2: rename the function as well.
Signed-off-by: Christian König
Reviewed-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/a