On Tue, Apr 11, 2023 at 08:35:48PM +0300, Dmitry Baryshkov wrote:
> On Tue, 11 Apr 2023 at 20:13, Rob Clark wrote:
> >
> > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter wrote:
> > >
> > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > >
On Tue, Apr 11, 2023 at 11:18:29AM -0600, Jeffrey Hugo wrote:
> On 4/11/2023 10:31 AM, Daniel Vetter wrote:
> > On Tue, Apr 11, 2023 at 09:29:27AM -0600, Jeffrey Hugo wrote:
> > > On 4/11/2023 9:26 AM, Jeffrey Hugo wrote:
> > > > On 4/11/2023 9:13 AM, Greg KH wrote:
or helpers that will be developed or modified.
>
> Cc: Dave Airlie
> Cc: Daniel Vetter
> Cc: Oded Gabbay
> Signed-off-by: Rodrigo Vivi
> Signed-off-by: Francois Dugast
> Signed-off-by: Luis Strano
> Signed-off-by: Matthew Brost
> Signed-off-by: Thom
ion/gpu/drm-usage-stats.rst | 21 +++
> > drivers/gpu/drm/drm_file.c| 79 +++
> > drivers/gpu/drm/msm/msm_drv.c | 25 -
> > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > include/drm/drm_file.h| 10
> > 5 files changed, 134 insertions(+), 3 deletions(-)
> >
> > --
> > 2.39.2
> >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
> > > drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 53 +
> > > drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h | 6 +
> > > drivers/gpu/drm/i915/i915_gpu_error.c | 130 ++
> > > drivers/gpu/drm/i915/i915_gpu_error.h | 8 ++
> > > 4 files changed, 197 insertions(+)
> > >
> > > --
> > > 2.39.1
> > >
>
On Tue, Apr 11, 2023 at 09:29:27AM -0600, Jeffrey Hugo wrote:
> On 4/11/2023 9:26 AM, Jeffrey Hugo wrote:
> > On 4/11/2023 9:13 AM, Greg KH wrote:
> > > On Tue, Apr 11, 2023 at 09:08:39AM -0600, Jeffrey Hugo wrote:
> > > > On 4/11/2023 9:01 AM, Daniel Vetter wrote:
On Tue, Apr 11, 2023 at 08:02:09AM -0700, Rob Clark wrote:
> On Tue, Apr 11, 2023 at 3:43 AM Daniel Vetter wrote:
> >
> > On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> > > From: Rob Clark
> > >
> > > Add a helper to dump memory stats to
On Tue, Apr 11, 2023 at 12:40:28PM +0200, Greg KH wrote:
> On Tue, Apr 11, 2023 at 11:55:20AM +0200, Daniel Vetter wrote:
> > On Tue, Apr 11, 2023 at 02:38:12PM +1000, Stephen Rothwell wrote:
> > > Hi all,
> > >
> > > After merging the driver-core tr
On Tue, Apr 11, 2023 at 07:55:33AM -0700, Rob Clark wrote:
> On Tue, Apr 11, 2023 at 3:27 AM Daniel Vetter wrote:
> > > Konrad Dybcio (18):
> > > drm/msm/adreno: Use OPP for every GPU generation
> >
> > This had a minor conflict with refactoring from drm-mis
ll set the gem buffer size here.
If otoh we need this too, then there's a few more places that need to be
fixed.
> + screen_size = sizes->surface_height * buffer->fb->pitches[0];
> +
> screen_buffer = vzalloc(screen_size);
> if (!screen_buffer) {
> ret = -ENOMEM;
Cheers, Daniel
> --
> 2.25.1
>
TC, NULL if disabled. Do not this write directly,
> + * Currently bound CRTC, NULL if disabled. Do not write this directly,
>* use drm_atomic_set_crtc_for_plane()
>*/
> struct drm_crtc *crtc;
> --
> 2.25.1
>
On Fri, Apr 07, 2023 at 10:54:00PM +0200, Helge Deller wrote:
> On 4/6/23 15:21, Thomas Zimmermann wrote:
> > From: Daniel Vetter
> >
> > Since vgaarb has been promoted to be a core piece of the pci subsystem
> > we don't have to open code random guesses anymore
On Tue, Apr 11, 2023 at 04:03:24PM +0200, Javier Martinez Canillas wrote:
> Daniel Vetter writes:
>
> > This is an oversight from dc5bdb68b5b3 ("drm/fb-helper: Fix vt
> > restore") - I failed to realize that nasty userspace could set this.
> >
> > It
> > >
> > > > > > > > Idea is to be able to communicate to the submission backend
> > > > > > > > with in-band (relative to main execution function) messages.
> > > > > > > > Messages are backend defined and flexible enough for any use
> > > > > > > > case. In Xe we use these messages to clean up entities, set
> > > > > > > > properties for entities, and suspend / resume execution of an
> > > > > > > > entity [5]. I suspect other drivers can leverage this messaging
> > > > > > > > concept too as it is a convenient way to avoid races in the
> > > > > > > > backend.
> > > > > > > >
> > > > > > > > - Support for using TDR for all error paths of a scheduler /
> > > > > > > > entity
> > > > > > > >
> > > > > > > > Fix a few races / bugs, add function to dynamically set the TDR
> > > > > > > > timeout.
> > > > > > > >
> > > > > > > > - Annotate dma-fences for long running workloads.
> > > > > > > >
> > > > > > > > The idea here is to use dma-fences only as sync points within
> > > > > > > > the
> > > > > > > > scheduler and never export them for long running workloads. By
> > > > > > > > annotating these fences as long running we ensure that these
> > > > > > > > dma-
> > > fences
> > > > > > > > are never used in a way that breaks the dma-fence rules. A
> > > > > > > > benefit of
> > > > > > > > this approach is that the scheduler can still safely flow control the
> > > > > > > > execution ring buffer via the job limit without breaking the
> > > > > > > > dma-fence
> > > > > > > > rules.
> > > > > > > >
> > > > > > > > Again, this is a first draft and looking forward to feedback.
> > > > > > > >
> > > > > > > > Enjoy - Matt
> > > > > > > >
> > > > > > > > [1] https://gitlab.freedesktop.org/drm/xe/kernel
> > > > > > > > [2] https://patchwork.freedesktop.org/series/112188/
> > > > > > > > [3] https://patchwork.freedesktop.org/series/114772/
> > > > > > > > [4]
> > > https://patchwork.freedesktop.org/patch/515854/?series=112188&rev=1
> > > > > > > > [5] https://gitlab.freedesktop.org/drm/xe/kernel/-/blob/drm-xe-
> > > > > > > > next/drivers/gpu/drm/xe/xe_guc_submit.c#L1031
> > > > > > > >
> > > > > > > > Matthew Brost (8):
> > > > > > > > drm/sched: Convert drm scheduler to use a work queue rather
> > > > > > > > than
> > > > > > > > kthread
> > > > > > > > drm/sched: Move schedule policy to scheduler / entity
> > > > > > > > drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling
> > > policy
> > > > > > > > drm/sched: Add generic scheduler message interface
> > > > > > > > drm/sched: Start run wq before TDR in drm_sched_start
> > > > > > > > drm/sched: Submit job before starting TDR
> > > > > > > > drm/sched: Add helper to set TDR timeout
> > > > > > > > drm/syncobj: Warn on long running dma-fences
> > > > > > > >
> > > > > > > > Thomas Hellström (2):
> > > > > > > > dma-buf/dma-fence: Introduce long-running completion fences
> > > > > > > > drm/sched: Support long-running sched entities
> > > > > > > >
> > > > > > > >drivers/dma-buf/dma-fence.c | 142 +++---
> > > > > > > >drivers/dma-buf/dma-resv.c | 5 +
> > > > > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 14 +-
> > > > > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 15 +-
> > > > > > > >drivers/gpu/drm/drm_syncobj.c | 5 +-
> > > > > > > >drivers/gpu/drm/etnaviv/etnaviv_sched.c | 5 +-
> > > > > > > >drivers/gpu/drm/lima/lima_sched.c | 5 +-
> > > > > > > >drivers/gpu/drm/msm/adreno/adreno_device.c | 6 +-
> > > > > > > >drivers/gpu/drm/msm/msm_ringbuffer.c| 5 +-
> > > > > > > >drivers/gpu/drm/panfrost/panfrost_job.c | 5 +-
> > > > > > > >drivers/gpu/drm/scheduler/sched_entity.c| 127 +++--
> > > > > > > >drivers/gpu/drm/scheduler/sched_fence.c | 6 +-
> > > > > > > >drivers/gpu/drm/scheduler/sched_main.c | 278
> > > > > > > > +++--
> > > ---
> > > > > > > >drivers/gpu/drm/v3d/v3d_sched.c | 25 +-
> > > > > > > >include/drm/gpu_scheduler.h | 130 +++--
> > > > > > > >include/linux/dma-fence.h | 60 -
> > > > > > > >16 files changed, 649 insertions(+), 184 deletions(-)
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.34.1
>
how current
upstream drivers use it"
I don't think there's a lot needed in terms of drm/sched driver api
rework, but I think it's also pretty clearly not ever going to get
anywhere with just nothing at all. Writing an entire new scheduler lib,
instead of at least trying out what minimal semantic changes (rather than
just a pile of hacks without even doc changes for the new rules) would
look like, does not sound like a good idea to me :-)
> If you want a workload to try to see if you run into any of these things,
> running and killing lots of things in parallel is a good thing to try (mess
> with the numbers and let it run for a while to see if you can hit any corner
> cases):
>
> while true; do for i in $(seq 1 10); do timeout -k 0.01 0.05 glxgears &
> done; sleep 0.1; done
Maybe xe gets away with this due to synchronously killing everything
related to a ctx, but yeah I'd expect this to go boom in fun ways.
-Daniel
On Tue, Apr 11, 2023 at 02:11:18PM +0200, Christian König wrote:
> Am 11.04.23 um 11:51 schrieb Daniel Vetter:
> > On Tue, Apr 04, 2023 at 10:06:49PM +0200, Thomas Hellström wrote:
> > > When swapping out, we will split multi-order pages both in order to
> > > move them
On Tue, Apr 04, 2023 at 09:39:34PM +0200, Daniel Vetter wrote:
> This is an oversight from dc5bdb68b5b3 ("drm/fb-helper: Fix vt
> restore") - I failed to realize that nasty userspace could set this.
>
> It's not pretty to mix up kernel-internal and userspace uapi flags
> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> index 0d1f853092ab..7bd8a1374f39 100644
> --- a/include/drm/drm_file.h
> +++ b/include/drm/drm_file.h
> @@ -41,6 +41,7 @@
> struct dma_fence;
> struct drm_file;
> struct drm_device;
> +struct drm_printer;
> struct device
maybe with proper color rendering this is changing :-)
> If this is a module parameter instead of a KMS property, what purpose
> does this achieve? What is the use-case? Just trying to understand the
> motivation here.
Just a step to get things going, occasionally that's needed ...
-Daniel
thdr.c | 370 ++
> drivers/gpu/drm/mediatek/mtk_ethdr.h | 25 +
> drivers/gpu/drm/mediatek/mtk_mdp_rdma.c| 24 +
> 19 files changed, 1858 insertions(+), 319 deletions(-)
> create mode 100644
> Documentation/devicetree/
return formats;
> +}
> +
> +size_t mtk_mdp_rdma_get_num_formats(struct device *dev)
> +{
> + return ARRAY_SIZE(formats);
> +}
> +
> int mtk_mdp_rdma_clk_enable(struct device *dev)
> {
> struct mtk_mdp_rdma *rdma = dev_get_drvdata(dev);
> --
> 2.18.0
>
.h| 17 +
> include/linux/dma-fence.h | 22 +
> include/linux/dma-resv.h |2 +
> include/uapi/drm/msm_drm.h | 18 +-
> include/uapi/linux/sync_file.h | 37 +-
> 125 files changed, 6659 insertions(+), 4519 deletions(-)
> create mode 100644
> Documentation/devicetree/bindings/display/msm/qcom,sm8550-dpu.yaml
> create mode 100644
> Documentation/devicetree/bindings/display/msm/qcom,sm8550-mdss.yaml
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
> create mode 100644 drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
drivers/accel/habanalabs/common/sysfs.c| 6 +-
> drivers/accel/habanalabs/gaudi/gaudi.c | 86 +
> drivers/accel/habanalabs/gaudi/gaudiP.h| 15 -
> drivers/accel/habanalabs/gaudi2/gaudi2.c | 347
> +++--
> drivers
b/drivers/accel/qaic/mhi_qaic_ctrl.c
> @@ -541,7 +541,7 @@ int mhi_qaic_ctrl_init(void)
> return ret;
>
> mqc_dev_major = ret;
> - mqc_dev_class = class_create(THIS_MODULE, MHI_QAIC_CTRL_DRIVER_NAME);
> + mqc_dev_class = class_create(MHI_QAIC_CTRL_DRIVER_NAME);
> if (IS_ERR(mqc_dev_class)) {
> ret = PTR_ERR(mqc_dev_class);
> goto unregister_chrdev;
> --
> 2.39.2
>
> --
> Cheers,
> Stephen Rothwell
st);
>
> - for (i = 0; i < MAX_ORDER; ++i) {
> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> ttm_pool_type_init(&global_write_combined[i], NULL,
> ttm_write_combined, i);
> ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
> @@ -790,7 +798,7 @@ void ttm_pool_mgr_fini(void)
> {
> unsigned int i;
>
> - for (i = 0; i < MAX_ORDER; ++i) {
> + for (i = 0; i < TTM_DIM_ORDER; ++i) {
> ttm_pool_type_fini(&global_write_combined[i]);
> ttm_pool_type_fini(&global_uncached[i]);
>
> --
> 2.39.2
>
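The hunks above replace MAX_ORDER with TTM_DIM_ORDER as the loop bound in both the pool manager's init and fini paths. The invariant being preserved is that setup and teardown iterate over exactly the same range; a minimal userspace model of that pattern (all names here are invented stand-ins, not the real TTM symbols):

```c
#include <assert.h>

#define DIM_ORDER 4  /* stand-in for TTM_DIM_ORDER; value is illustrative */

static int inited[DIM_ORDER];

static void pool_type_init(int i) { inited[i] = 1; }
static void pool_type_fini(int i) { inited[i] = 0; }

static void mgr_init(void)
{
	for (int i = 0; i < DIM_ORDER; ++i)
		pool_type_init(i);
}

static void mgr_fini(void)
{
	/* must use the same bound as mgr_init(), or entries leak / underflow */
	for (int i = 0; i < DIM_ORDER; ++i)
		pool_type_fini(i);
}
```

The point is that a bound change like this has to touch every loop that walks the same arrays, which is exactly what the patch does.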
On Thu, Apr 06, 2023 at 03:06:56PM -0300, Maíra Canal wrote:
> On 4/6/23 14:28, Daniel Vetter wrote:
> > On Thu, 6 Apr 2023 at 19:20, Maíra Canal wrote:
> > >
> > > After commit 8ba1648567e2 ("drm: vkms: Refactor the plane composer to
> > > accept n
>
>
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information abou
On Thu, Apr 06, 2023 at 07:42:44PM +0200, Daniel Vetter wrote:
> But I think there's tools to make sure we don't dig a complete hole with
> these it sounds like. I guess another topic for pestering the rust folks.
Or to put it very bluntly: Could we make Arc at least runtime e
On Fri, Apr 07, 2023 at 02:11:32AM +0900, Asahi Lina wrote:
> On 07/04/2023 00.30, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 11:43:19PM +0900, Asahi Lina wrote:
> > > On 06/04/2023 22.37, Daniel Vetter wrote:
> > > > On Thu, Apr 06, 2023 at 09:21:47PM +0900,
kms/vkms_drv.h
> index 4a248567efb2..4bc2e6a6d219 100644
> --- a/drivers/gpu/drm/vkms/vkms_drv.h
> +++ b/drivers/gpu/drm/vkms/vkms_drv.h
> @@ -113,6 +113,7 @@ struct vkms_config {
> bool writeback;
> bool cursor;
> bool overlay;
> + unsigned long *background_color;
> /* only set when instantiated */
> struct vkms_device *dev;
> };
> @@ -127,6 +128,9 @@ struct vkms_device {
> #define drm_crtc_to_vkms_output(target) \
> container_of(target, struct vkms_output, crtc)
>
> +#define vkms_output_to_vkms_device(target) \
> + container_of(target, struct vkms_device, output)
> +
> #define drm_device_to_vkms_device(target) \
> container_of(target, struct vkms_device, drm)
>
> --
> 2.39.2
>
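The container_of-based macros in the hunk above recover the enclosing vkms structure from a pointer to one of its embedded members. A standalone illustration of the pattern (struct names here are invented, not the actual vkms types, and the macro is the kernel definition minus its type-checking extras):

```c
#include <assert.h>
#include <stddef.h>

/* same idea as the kernel macro, without the type-checking extras */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct output {
	int id;
};

struct vkms_like_device {
	int magic;
	struct output output;	/* embedded by value, not a pointer */
};

#define output_to_device(target) \
	container_of(target, struct vkms_like_device, output)
```

This only works because the member is embedded in the containing struct; passing a pointer to a standalone struct output would compute garbage.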
On Thu, 6 Apr 2023 at 18:58, Matthew Brost wrote:
>
> On Thu, Apr 06, 2023 at 08:32:59AM +0200, Daniel Vetter wrote:
> > On Wed, Apr 05, 2023 at 11:58:44PM +, Matthew Brost wrote:
> > > On Wed, Apr 05, 2023 at 03:09:08PM +0200, Daniel Vetter wrote:
> > > > O
ap, nouveau color depth
Boris Brezillon (1):
drm/panfrost: Fix the panfrost_mmu_map_fault_addr() error path
Daniel Vetter (2):
Merge tag 'drm-intel-fixes-2023-04-05' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
Merge tag 'drm-misc-fixes-2023-04
>base.resv, job->done_fence,
> DMA_RESV_USAGE_WRITE);
> + dma_resv_add_fence(bo->base.resv, job->done_fence,
> DMA_RESV_USAGE_WRITE);
>
> unlock_reservations:
> - drm_gem_unlock_reservations((struct drm_gem_object **)job->bos,
> buf_count, &acquire_ctx);
> + drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1,
> &acquire_ctx);
>
> wmb(); /* Flush write combining buffers */
>
> --
> 2.25.1
>
s(+), 2888 deletions(-)
> delete mode 100644 drivers/gpu/drm/i915/Kconfig.unstable
> create mode 100644 drivers/gpu/drm/i915/display/intel_dp_aux_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/intel_dsb_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/intel_fdi_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/intel_pps_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/intel_psr_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/intel_tv_regs.h
> create mode 100644 drivers/gpu/drm/i915/display/skl_watermark_regs.h
> create mode 100644 drivers/gpu/drm/i915/intel_clock_gating.c
> create mode 100644 drivers/gpu/drm/i915/intel_clock_gating.h
> delete mode 100644 drivers/gpu/drm/i915/intel_pm.c
> delete mode 100644 drivers/gpu/drm/i915/intel_pm.h
On Thu, 6 Apr 2023 at 17:59, Daniel Vetter wrote:
>
> On Thu, Apr 06, 2023 at 05:33:18PM +0200, Christian König wrote:
> > Am 06.04.23 um 17:24 schrieb Lucas Stach:
> > > Am Donnerstag, dem 06.04.2023 um 16:21 +0200 schrieb Christian König:
> > > > Am 06.0
On Fri, Apr 07, 2023 at 12:53:47AM +0900, Asahi Lina wrote:
> On 06/04/2023 23.15, Daniel Vetter wrote:
> > On Tue, Mar 07, 2023 at 11:25:32PM +0900, Asahi Lina wrote:
> > > drm_mm provides a simple range allocator, useful for managing virtual
> > > address ranges. Add a
rd bounds on when a job will get inactive, at least it's not
> > > > unbounded. On a crash/fault the job will be removed from the hardware
> > > > pretty soon.
> > > >
> > > > Well behaved jobs after a application shutdown might take a little
> > > > longer, but I don't really see the new problem with keeping the entity
> > > > alive? As long as a job is active on the hardware, we can't throw out
> > > > the VM or BOs, no difference whether the entity is kept alive or not.
> > > Exactly that's the problem. VM & BOs are dropped as soon as the process
> > > is destroyed, we *don't* wait for the hw to finish before doing so.
> > >
> > > Just the backing store managed by all the house keeping objects isn't
> > > freed until the hw is idle preventing a crash or accessing freed memory.
> > >
> > > This behavior is rather important for the OOM killer since we need to be
> > > able to tear down the process as fast as possible in that case.
> > >
> > Are you talking about the dropping of pending jobs in
> > drm_sched_entity_kill? I'm certainly not trying to change that in any
> > way. Those aren't put onto the hardware yet, so we can always safely
> > drop them and do so as fast as possible.
> >
> > What I'm concerned about are the jobs that are already scheduled on the
> > HW. At least with Vivante hardware there is no race free way to get rid
> > of jobs once they are put on the ring. So whatever the scheduler or DRM
> > core is doing, we have to hold on to the BOs and GPU memory management
> > structures to keep the hardware from operating on freed memory.
> >
> > That's already a lot of memory, so I don't really see the issue with
> > keeping the entity around in a quiescent state until all the currently
> > queued jobs have left the HW.
> >
> > > Changing that is possible, but that's quite a huge change I'm not really
> > > willing to do just for tracking the time spent.
> > >
> > Yea, it's a big change and whether it's a good idea really depends on
> > what we are gaining from it. You seem to see quite low value in "just
> > tracking the time spent" and that might be true, but it also forces all
> > drivers that want to implement fdinfo to roll their own time tracking.
> > I would rather see more of this moved to the scheduler and thus shared
> > between drivers.
>
> That's generally a good idea, but if that means that we need to restructure
> the whole entity handling then I would object. That's simply not worth it
> when we can implement it differently.
>
> What we could do is to keep the submitted fences around in the entity.
> Similar to the tracking amdgpu does, see struct amdgpu_ctx_entity.
>
> This way the entity doesn't need to stay around after it delivered the job
> to the hw.
I've done a _very_ cursory look, but from that the design seems to be that
we only keep something very small around to avoid oopsing (just
drm_sched_entity) and the overall gpu ctx goes away synchronously (more or
less) when userspace destroys it.
And then the actual fdinfo reporting or any reporting is a pure pull model
where you come from drm_file -> gpu ctx -> amdgpu_ctx_entity and have
borrowed references by holding enough locks.
The push model, where the scheduler job actively pushes the stats all the
way to the userspace/uapi ctx object is imo impossible to get right.
i915-gem tried a few times, and solutions start with sprinkling rcu
everywhere and only get worse. Imo absolute no-go.
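The pull model described above can be sketched in userspace as follows (all names invented, not the real drm/amdgpu structures): the scheduler side only updates counters it owns, and the fdinfo reader walks file -> per-ctx stats under a lock, borrowing references for the duration of the read.

```c
#include <assert.h>
#include <pthread.h>

struct ctx_stats {
	unsigned long ns_active;	/* owned/updated by the scheduler side */
};

struct file_sketch {
	pthread_mutex_t lock;		/* protects the ctx table */
	struct ctx_stats ctx[8];
	int nctx;
};

/* scheduler side: bumps only its own counter, never touches uapi objects */
static void job_done(struct file_sketch *f, int ctx_idx, unsigned long ns)
{
	pthread_mutex_lock(&f->lock);
	f->ctx[ctx_idx].ns_active += ns;
	pthread_mutex_unlock(&f->lock);
}

/* fdinfo side: pulls a consistent snapshot under the same lock */
static unsigned long fdinfo_total_ns(struct file_sketch *f)
{
	unsigned long total = 0;

	pthread_mutex_lock(&f->lock);
	for (int i = 0; i < f->nctx; i++)
		total += f->ctx[i].ns_active;
	pthread_mutex_unlock(&f->lock);
	return total;
}
```

The inverse (the job completion path chasing pointers out to a userspace-visible ctx object to push stats into it) is what the mail argues cannot be made race-free without rcu sprinkled everywhere.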
If I understand this right then yes, I think it makes sense to extract the
amdgpu_ctx_entity split into a bit of a helper. Might want to extract the
context xarray and lookup too and put it into drm_file, including the
amdgpu_ctx_mgr->lock and iterating over stats for fdinfo maybe too to make
this really worth it.
Of all the failed attempts I've seen for exposing ctx stats this (amdgpu
push model) is imo the only solid one.
Cheers, Daniel
On Thu, Apr 06, 2023 at 05:28:59PM +0200, Miguel Ojeda wrote:
> On Thu, Apr 6, 2023 at 4:15 PM Daniel Vetter wrote:
> >
> > Documentation:
> >
> > In drm we try to document all the interfaces that drivers use with formal
> > docs. Yes there's some areas tha
On Thu, Apr 06, 2023 at 11:43:19PM +0900, Asahi Lina wrote:
> On 06/04/2023 22.37, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 09:21:47PM +0900, Asahi Lina wrote:
> > > On 06/04/2023 19.09, Daniel Vetter wrote:
> > > > On Thu, Apr 06, 2023 at 06:05:11PM +0900,
ned,
> +})?;
> +
> +mm_node.node.start = start;
> + mm_node.node.size = size;
> +mm_node.node.color = color as core::ffi::c_ulong;
> +
> +let guard = self.mm.lock();
> +// SAFETY: We hold the lock and all pointers are valid.
> +to_result(unsafe { bindings::drm_mm_reserve_node(guard.0.get(), &mut
> mm_node.node) })?;
> +
> +mm_node.valid = true;
> +
> +Ok(Pin::from(mm_node))
> +}
> +
> +/// Operate on the inner user type `A`, taking the allocator lock
> +pub fn with_inner(&self, cb: impl FnOnce(&mut A) -> RetVal) ->
> RetVal {
> +let mut guard = self.mm.lock();
> +cb(&mut guard.1)
> +}
> +}
> +
> +impl, T> Drop for MmInner {
> +fn drop(&mut self) {
> +// SAFETY: If the MmInner is dropped then all nodes are gone (since
> they hold references),
> +// so it is safe to tear down the allocator.
> +unsafe {
> +bindings::drm_mm_takedown(self.0.get());
> +}
> +}
> +}
> +
> +// MmInner is safely Send if the AllocInner user type is Send.
> +unsafe impl, T> Send for MmInner {}
> diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
> index c44760a1332f..73fab2dee3af 100644
> --- a/rust/kernel/drm/mod.rs
> +++ b/rust/kernel/drm/mod.rs
> @@ -7,3 +7,4 @@ pub mod drv;
> pub mod file;
> pub mod gem;
> pub mod ioctl;
> +pub mod mm;
>
> --
> 2.35.1
>
On Thu, Apr 06, 2023 at 02:19:20PM +0200, Lucas Stach wrote:
> Am Donnerstag, dem 06.04.2023 um 14:09 +0200 schrieb Daniel Vetter:
> > On Thu, Apr 06, 2023 at 12:45:12PM +0200, Lucas Stach wrote:
> > > Am Donnerstag, dem 06.04.2023 um 10:27 +0200 schrieb Daniel Vetter:
> >
On Thu, Apr 06, 2023 at 04:18:46PM +0300, Mikko Perttunen wrote:
> On 4/6/23 16:09, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 02:14:04PM +0200, Thierry Reding wrote:
> > > Hi Dave, Daniel,
> > >
> > > The following changes since commit
> >
On Thu, Apr 06, 2023 at 10:32:29PM +0900, Asahi Lina wrote:
> On 06/04/2023 20.25, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 02:02:55PM +0900, Asahi Lina wrote:
> > > On 05/04/2023 23.44, Daniel Vetter wrote:
> > > > On Tue, Mar 07, 2023 at 11:2
On Thu, Apr 06, 2023 at 10:15:56PM +0900, Asahi Lina wrote:
> On 06/04/2023 20.55, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 01:44:22PM +0900, Asahi Lina wrote:
> > > On 05/04/2023 23.37, Daniel Vetter wrote:
> > > > On Tue, Mar 07, 2023 at 11:2
On Thu, Apr 06, 2023 at 09:21:47PM +0900, Asahi Lina wrote:
> On 06/04/2023 19.09, Daniel Vetter wrote:
> > On Thu, Apr 06, 2023 at 06:05:11PM +0900, Asahi Lina wrote:
> > > On 06/04/2023 17.27, Daniel Vetter wrote:
> > > > On Thu, 6 Apr 2023 at 10:22, C
| 14 +-
> drivers/gpu/host1x/Kconfig | 2 +-
> drivers/gpu/host1x/bus.c| 6 +-
> drivers/gpu/host1x/context.c| 24 ++--
> drivers/gpu/host1x/mipi.c | 4 +-
> drivers/gpu/host1x/syncpt.c | 8 +-
> drivers/staging/media/tegra-video/csi.c | 8 +-
> drivers/staging/media/tegra-video/vi.c | 8 +-
> include/linux/host1x.h | 2 +-
> 30 files changed, 370 insertions(+), 465 deletions(-)
> create mode 100644 drivers/gpu/drm/tegra/fbdev.c
bridge/ti-sn65dsi83, ps8640.
> > - Use pci aperture helpers in drm/ast lynxfb, radeonfb.
> > - Revert some lima patches, as they required a commit that has been
> >reverted upstream.
> > - Add AUO NE135FBM-N41 v8.1 eDP panel.
> > - Add QAIC accel driver.
> >
>
/gpu/drm/i915/i915_getparam.c | 2 +-
> drivers/gpu/drm/i915/i915_pci.c| 1 +
> drivers/gpu/drm/i915/i915_perf.c | 570
> -
> drivers/gpu/drm/i915/i915_perf.h | 4 +-
> drivers/gpu/drm/i915/i915_perf_oa_regs.h | 78 +++
> drivers/gpu/drm/i915/i915_perf_types.h | 75 ++-
> drivers/gpu/drm/i915/i915_pmu.c| 10 +-
> drivers/gpu/drm/i915/i915_reg.h| 14 +-
> drivers/gpu/drm/i915/i915_scatterlist.c| 2 +-
> drivers/gpu/drm/i915/i915_vma.c| 3 +-
> drivers/gpu/drm/i915/intel_device_info.h | 1 +
> drivers/gpu/drm/i915/intel_region_ttm.c| 1 +
> drivers/gpu/drm/i915/intel_uncore.c| 47 +-
> drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 356 -
> include/uapi/drm/i915_drm.h| 25 +-
> 63 files changed, 1241 insertions(+), 637 deletions(-)
On Thu, Apr 06, 2023 at 12:45:12PM +0200, Lucas Stach wrote:
> Am Donnerstag, dem 06.04.2023 um 10:27 +0200 schrieb Daniel Vetter:
> > On Thu, 6 Apr 2023 at 10:22, Christian König
> > wrote:
> > >
> > > Am 05.04.23 um 18:09 schrieb Luben Tuikov:
> > &g
On Thu, Apr 06, 2023 at 01:44:22PM +0900, Asahi Lina wrote:
> On 05/04/2023 23.37, Daniel Vetter wrote:
> > On Tue, Mar 07, 2023 at 11:25:43PM +0900, Asahi Lina wrote:
> > > +/// A generic monotonically incrementing ID used to uniquely identify
> > > object instances w
r not
has zero impact on whether I'll read a mail or not. It just kinda
disappears into the big label:cc bucket ...
-Daniel
>
> On 06/04/2023 13.44, Asahi Lina wrote:
> > On 05/04/2023 23.37, Daniel Vetter wrote:
> > > On Tue, Mar 07, 2023 at 11:25:43PM +0900, Asahi
On Thu, Apr 06, 2023 at 02:02:55PM +0900, Asahi Lina wrote:
> On 05/04/2023 23.44, Daniel Vetter wrote:
> > On Tue, Mar 07, 2023 at 11:25:43PM +0900, Asahi Lina wrote:
> > > +/// Look up a GEM object handle for a `File` and return an `ObjectRef`
> > > for it.
> &g
On Thu, Apr 06, 2023 at 01:02:34PM +0200, Javier Martinez Canillas wrote:
> This helper is just a wrapper that calls drm_connector_cleanup(), there's
> no need to have another level of indirection.
>
> Signed-off-by: Javier Martinez Canillas
On both patches:
Reviewed-by: Dani
On Thu, Apr 06, 2023 at 01:44:22PM +0900, Asahi Lina wrote:
> On 05/04/2023 23.37, Daniel Vetter wrote:
> > On Tue, Mar 07, 2023 at 11:25:43PM +0900, Asahi Lina wrote:
> > > +/// A generic monotonically incrementing ID used to uniquely identify
> > > object instances w
On Thu, Apr 06, 2023 at 12:14:36PM +0200, Christian König wrote:
> Am 06.04.23 um 08:37 schrieb Daniel Vetter:
> > On Thu, Apr 06, 2023 at 02:08:10AM +, Matthew Brost wrote:
> > > On Wed, Apr 05, 2023 at 12:12:27PM +0200, Daniel Vetter wrote:
> > > > On Wed,
On Thu, Apr 06, 2023 at 06:05:11PM +0900, Asahi Lina wrote:
> On 06/04/2023 17.27, Daniel Vetter wrote:
> > On Thu, 6 Apr 2023 at 10:22, Christian König
> > wrote:
> > >
> > > Am 05.04.23 um 18:09 schrieb Luben Tuikov:
> > > > On 2023-04-05 10:05, D
mething
> silly like having abstraction-specific hw_fence wrappers, and then you run
> into deadlocks due to the scheduler potentially being dropped from the
> job_done callback when the fence reference is dropped and just... no,
> please. This is terrible. If you don't want me to fix it we'll have to find
> another way because I can't work with this.
So generally the hw fence keeps the underlying gpu ctx alive, and the gpu
context keeps the gpu vm alive. Pretty much has to, or your gpu starts
executing stuff that's freed.
Now for fw scheduler gpu ctx isn't just drm_sched_entity, but also
drm_scheduler. Plus whatever driver stuff you have lying around for a ctx.
Plus ofc a reference to the vm, which might in turn keep a ton of gem_bo
alive.
Still I'm not seeing where the fundamental issue is in this refcount
scheme, or where it's silly? Mapping this all to Rust correctly is a
challenge for sure, and also untangling the assumption that drm_scheduler
is suddenly a lot more dynamic object (see my other reply), but
fundamentally calling this all silly and terrible I don't understand ...
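The refcount chain described in this mail (hw fence holds the gpu ctx, the ctx holds the vm, the vm holds its BOs) can be modeled in userspace with plain reference counts. A sketch only; every struct and helper name here is illustrative, not a real driver object:

```c
#include <assert.h>
#include <stdlib.h>

static int live;	/* number of objects still allocated */

struct bo    { int refs; };
struct vm    { int refs; struct bo *bo; };
struct gctx  { int refs; struct vm *vm; };
struct fence { int refs; struct gctx *ctx; };

static void *obj_alloc(size_t n) { live++; return calloc(1, n); }
static void obj_free(void *p)    { live--; free(p); }

static void bo_put(struct bo *b)      { if (--b->refs == 0) obj_free(b); }
static void vm_put(struct vm *v)      { if (--v->refs == 0) { bo_put(v->bo); obj_free(v); } }
static void ctx_put(struct gctx *c)   { if (--c->refs == 0) { vm_put(c->vm); obj_free(c); } }
static void fence_put(struct fence *f){ if (--f->refs == 0) { ctx_put(f->ctx); obj_free(f); } }

/* one userspace ref plus one fence ref on the ctx; chain below it */
static struct fence *chain_new(void)
{
	struct bo *b    = obj_alloc(sizeof(*b));	b->refs = 1;
	struct vm *v    = obj_alloc(sizeof(*v));	v->refs = 1; v->bo = b;
	struct gctx *c  = obj_alloc(sizeof(*c));	c->refs = 2; c->vm = v;
	struct fence *f = obj_alloc(sizeof(*f));	f->refs = 1; f->ctx = c;
	return f;
}
```

Dropping the userspace ctx reference frees nothing while a fence is outstanding; only when the last fence reference goes does the whole chain unwind, which is exactly the "gpu can't execute freed stuff" property the mail describes.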
-Daniel
m/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -295,6 +295,11 @@ struct drm_sched_fence {
> >* @lock: the lock used by the scheduled and the finished fences.
> >*/
> > spinlock_t lock;
> > +/**
> > + * @sched_name: the name of the scheduler that owns this fence. We
> > + * keep a copy here since fences can outlive their scheduler.
> > + */
> > + char sched_name[16];
> > /**
> >* @owner: job owner for debugging
> >*/
> >
> > ---
> > base-commit: fe15c26ee26efa11741a7b632e9f23b01aca4cc6
> > change-id: 20230406-scheduler-uaf-1-994ec34cac93
> >
> > Thank you,
> > ~~ Lina
> >
>
344-1-daniel.vet...@ffwll.ch/
> # 1
I'm still not clued in on why we need this, but I also don't think it's
terrible, so ...
Acked-by: Daniel Vetter
> ---
> drivers/gpu/drm/gma500/psb_drv.c | 48 ++
> drivers/video/aperture.c | 58
onfigure its output.
>
> Best regards,
> Alexander
>
> > [1]
> > https://lore.kernel.org/all/CAMty3ZD7eFi4o7ZXNtjShoLd5yj3wn85Fm6ZNL89=QpWj4
> > 4...@mail.gmail.com/T/ [2]
> > https://patchwork.kernel.org/project/dri-devel/patch/20230218111712.2380225
> > -6-treapk...@chromium.org/ [3]
> > https://indico.freedesktop.org/event/2/contributions/76/
> > [4] https://www.youtube.com/watch?v=PoYdP9fPn-4&t=624s
> >
> > Thanks,
> > Jagan.
>
>
> --
> TQ-Systems GmbH | Mühlstraße 2, Gut Delling | 82229 Seefeld, Germany
> Amtsgericht München, HRB 105018
> Geschäftsführer: Detlef Schneider, Rüdiger Stahl, Stefan Schneider
> http://www.tq-group.com/
>
>
> >>> Luben
> >>>
> >>>>> 2. Somehow make sure drm_sched_entity_destroy() does block until all
> >>>>> jobs deployed through this entity were fetched from the schedulers
> >>>>> pending list. Though, I'm pretty sure that this is not really desirable.
> >>>>>
> >>>>> 3. Just revert the change and let drivers implement tracking of GPU
> >>>>> active times themselves.
> >>>>>
> >>>> Given that we are already pretty late in the release cycle and etnaviv
> >>>> being the only driver so far making use of the scheduler elapsed time
> >>>> tracking I think the right short term solution is to either move the
> >>>> tracking into etnaviv or just revert the change for now. I'll have a
> >>>> look at this.
> >>>>
> >>>> Regards,
> >>>> Lucas
> >>>>
> >>>>> In the case of just reverting the change I'd propose to also set a jobs
> >>>>> entity pointer to NULL once the job was taken from the entity, such
> >>>>> that in case of a future issue we fail where the actual issue resides
> >>>>> and to make it more obvious that the field shouldn't be used anymore
> >>>>> after the job was taken from the entity.
> >>>>>
> >>>>> I'm happy to implement the solution we agree on. However, it might also
> >>>>> make sense to revert the change until we have a solution in place. I'm
> >>>>> also happy to send a revert with a proper description of the problem.
> >>>>> Please let me know what you think.
> >>>>>
> >>>>> - Danilo
> >>>>>
>
On Wed, 5 Apr 2023 at 19:46, Patrik Jakobsson
wrote:
>
> On Wed, Apr 5, 2023 at 7:15 PM Daniel Vetter wrote:
> >
> > On Wed, 5 Apr 2023 at 18:54, Javier Martinez Canillas
> > wrote:
> > >
> > > Daniel Vetter writes:
> > >
> > > &
On Thu, Apr 06, 2023 at 02:08:10AM +, Matthew Brost wrote:
> On Wed, Apr 05, 2023 at 12:12:27PM +0200, Daniel Vetter wrote:
> > On Wed, 5 Apr 2023 at 11:57, Christian König
> > wrote:
> > >
> > > Am 05.04.23 um 11:07 schrieb Daniel Vetter:
> > > >
On Wed, Apr 05, 2023 at 11:58:44PM +, Matthew Brost wrote:
> On Wed, Apr 05, 2023 at 03:09:08PM +0200, Daniel Vetter wrote:
> > On Tue, Apr 04, 2023 at 07:48:27PM +, Matthew Brost wrote:
> > > On Tue, Apr 04, 2023 at 09:25:52PM +0200, Daniel Vetter wrote:
> > > &
On Wed, Apr 05, 2023 at 06:50:22AM -0700, Rob Clark wrote:
> On Wed, Apr 5, 2023 at 6:31 AM Daniel Vetter wrote:
> >
> > If the crtc is being switched on or off then the semantics of
> > computing the timestamp of the next vblank is somewhat ill-defined.
> > And ind
On Wed, Apr 05, 2023 at 07:42:08PM +0200, Javier Martinez Canillas wrote:
> Daniel Vetter writes:
>
> > On Wed, Apr 05, 2023 at 06:27:17PM +0200, Javier Martinez Canillas wrote:
> >> Daniel Vetter writes:
>
> [...]
>
> >> >
> >> > The __
On Wed, Apr 05, 2023 at 05:43:01PM +0200, Daniel Vetter wrote:
> On Tue, Mar 07, 2023 at 11:25:37PM +0900, Asahi Lina wrote:
> > +/// An armed DRM scheduler job (not yet submitted)
> > +pub struct ArmedJob<'a, T: JobImpl>(Box<Job<T>>, PhantomData<&'a T>);
&g
omas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
On Wed, Apr 05, 2023 at 06:27:17PM +0200, Javier Martinez Canillas wrote:
> Daniel Vetter writes:
>
> [...]
>
> >>
> >> but only the 'var->xres > fb->width || var->yres > fb->height' from the
> >> conditions checked could be f
On Wed, 5 Apr 2023 at 18:54, Javier Martinez Canillas
wrote:
>
> Daniel Vetter writes:
>
> > On Wed, Apr 05, 2023 at 04:32:19PM +0200, Thomas Zimmermann wrote:
>
> [...]
>
> >> > > >/*
> >> > > > * WARNING: App
ign` above.
> +unsafe { T::Data::from_foreign(data_pointer) };
> +return Err(Error::from_kernel_errno(ret));
> +}
> +
> + this.registered = true;
> +Ok(())
> +}
> +
> +/// Returns a reference to the `Device` instance for this registration.
> +pub fn device(&self) -> &drm::device::Device {
> +&self.drm
> +}
> +}
> +
> +// SAFETY: `Registration` doesn't offer any methods or access to fields when
> shared between threads
> +// or CPUs, so it is safe to share it.
> +unsafe impl Sync for Registration {}
> +
> +// SAFETY: Registration with and unregistration from the drm subsystem can
> happen from any thread.
> +// Additionally, `T::Data` (which is dropped during unregistration) is
> `Send`, so it is ok to move
> +// `Registration` to different threads.
> +#[allow(clippy::non_send_fields_in_send_ty)]
> +unsafe impl Send for Registration {}
> +
> +impl Drop for Registration {
> +/// Removes the registration from the kernel if it has completed
> successfully before.
> +fn drop(&mut self) {
> +if self.registered {
> +// Get a pointer to the data stored in device before destroying
> it.
> +// SAFETY: `drm` is valid per the type invariant
> +let data_pointer = unsafe { (*self.drm.raw_mut()).dev_private };
> +
> +// SAFETY: Since `registered` is true, `self.drm` is both valid
> and registered.
> +unsafe { bindings::drm_dev_unregister(self.drm.raw_mut()) };
> +
> +// Free data as well.
> +// SAFETY: `data_pointer` was returned by `into_foreign` during
> registration.
> +unsafe { <T::Data as ForeignOwnable>::from_foreign(data_pointer)
> };
> +}
> +}
> +}
> diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
> index 9ec6d7cbcaf3..69376b3c6db9 100644
> --- a/rust/kernel/drm/mod.rs
> +++ b/rust/kernel/drm/mod.rs
> @@ -2,4 +2,6 @@
>
> //! DRM subsystem abstractions.
>
> +pub mod device;
> +pub mod drv;
> pub mod ioctl;
>
> --
> 2.35.1
>
t sparc64 and parisc use pgprot_uncached here, but as
> they don't define a custom pgprot_writecombine, this ends up being
> the same, and they can use the above definition as well.
>
> mips defines pgprot_writecombine but uses pgprot_noncached
> in fb_pgprotect(), which is probably a m
On Wed, Apr 05, 2023 at 04:32:19PM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 05.04.23 um 15:18 schrieb Daniel Vetter:
> > On Wed, Apr 05, 2023 at 01:16:27PM +0200, Javier Martinez Canillas wrote:
> > > Thomas Zimmermann writes:
> > >
> > > [...]
>
r {
> +const OPS: bindings::drm_sched_backend_ops =
> bindings::drm_sched_backend_ops {
> +prepare_job: Some(prepare_job_cb::<T>),
> + can_run_job: Some(can_run_job_cb::<T>),
> +run_job: Some(run_job_cb::<T>),
> +timedout_job: Some(timedout_job_cb::<T>),
> +free_job: Some(free_job_cb::<T>),
Two general questions with no relevance here really, just about vtable
best practices:
So the trait has default impls for exactly the functions that are optional
here, but either way we always end up with non-NULL function pointers. I
guess there's no way to avoid that when you have a nice wrapping with
traits and all that like here?
Another unrelated thing: How const is const? The C code side generally
uses ops pointers for runtime casting, so if the const is less const
than a naive C hacker would expect, it might result in some fun.
Cheers, Daniel
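For what it's worth, the Option-vs-NULL question can be sketched outside the kernel. This is plain userspace Rust with all names assumed (the `HAS_PREPARE` flag is hypothetical, mirroring the opt-in flag the kernel's `#[vtable]` pattern uses); with an associated const on the trait, the optional entry stays `None` at const time unless the implementor opts in:

```rust
// Hand-rolled ops table with one optional and one mandatory entry.
struct Ops {
    prepare: Option<fn(u32) -> u32>,
    run: fn(u32) -> u32,
}

trait JobImpl {
    // Assumed convention: overriders flip this to true alongside
    // providing a real `prepare`.
    const HAS_PREPARE: bool = false;
    fn prepare(n: u32) -> u32 {
        n
    }
    fn run(n: u32) -> u32;
}

struct MyJob;
impl JobImpl for MyJob {
    fn run(n: u32) -> u32 {
        n * 2
    }
}

// Const-evaluated: the unused default impl never lands in the table,
// so the C side would see a genuine NULL pointer.
const fn ops_for<T: JobImpl>() -> Ops {
    Ops {
        prepare: if T::HAS_PREPARE { Some(T::prepare) } else { None },
        run: T::run,
    }
}

fn main() {
    let ops = ops_for::<MyJob>();
    assert!(ops.prepare.is_none());
    println!("run(21) = {}", (ops.run)(21));
}
```

The trade-off is that the trait needs the extra associated const next to each optional method, which is exactly the boilerplate a wrapping macro would hide.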
> +};
> +/// Creates a new DRM Scheduler object
> +// TODO: Shared timeout workqueues & scores
> +pub fn new(
> +device: &impl device::RawDevice,
> +hw_submission: u32,
> +hang_limit: u32,
> +timeout_ms: usize,
> +name: &'static CStr,
> +) -> Result<Scheduler<T>> {
> +let mut sched: UniqueArc<MaybeUninit<SchedulerInner<T>>> =
> UniqueArc::try_new_uninit()?;
> +
> +// SAFETY: The drm_sched pointer is valid and pinned as it was just
> allocated above.
> +to_result(unsafe {
> +bindings::drm_sched_init(
> +addr_of_mut!((*sched.as_mut_ptr()).sched),
> +&Self::OPS,
> +hw_submission,
> +hang_limit,
> +
> bindings::msecs_to_jiffies(timeout_ms.try_into()?).try_into()?,
> +core::ptr::null_mut(),
> +core::ptr::null_mut(),
> +name.as_char_ptr(),
> +device.raw_device(),
> +)
> +})?;
> +
> +// SAFETY: All fields of SchedulerInner are now initialized.
> +Ok(Scheduler(unsafe { sched.assume_init() }.into()))
> +}
> +}
>
> --
> 2.35.1
>
m_gem_object {
>* The current LRU list that the GEM object is on.
>*/
> struct drm_gem_lru *lru;
> +
> + /**
> + * @exportable:
> + *
> + * Whether this GEM object can be exported via the
> drm_gem_object_funcs->export
> + * callback. Defaults to true.
> + */
> + bool exportable;
> };
>
> /**
>
> --
> 2.35.1
>
m_device and hence Driver) can ensure at the
type level that you only put the right objects into the drm_file
- a generic lookup_handle function on the drm_file knows the Driver trait
and so can give you back the right type right away.
Why the wrapping, what do I miss?
-Daniel
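To make the lookup_handle point concrete, here's a standalone userspace sketch (all names assumed, no kernel crates): parameterizing the file by the driver type lets lookup_handle hand back the driver's own object type directly, with no downcast at the call site.

```rust
use std::collections::HashMap;

// Assumed stand-in for the drv::Driver trait.
trait Driver {
    type Object: Clone;
}

// A drm_file parameterized by its driver.
struct File<D: Driver> {
    handles: HashMap<u32, D::Object>,
}

impl<D: Driver> File<D> {
    // The return type is statically the driver's object type.
    fn lookup_handle(&self, h: u32) -> Option<D::Object> {
        self.handles.get(&h).cloned()
    }
}

struct MyDriver;

#[derive(Clone, Debug, PartialEq)]
struct MyBo {
    size: usize,
}

impl Driver for MyDriver {
    type Object = MyBo;
}

fn main() {
    let mut f: File<MyDriver> = File { handles: HashMap::new() };
    f.handles.insert(1, MyBo { size: 4096 });
    // No "is this really my driver's BO?" check needed here.
    println!("{:?}", f.lookup_handle(1));
}
```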
On Wed, Apr 05, 2023 at 04:14:11PM +0200, Christian König wrote:
> Am 05.04.23 um 15:40 schrieb Daniel Vetter:
> > On Tue, Mar 07, 2023 at 11:25:35PM +0900, Asahi Lina wrote:
> > > Some hardware may require more complex resource utilization accounting
> > > than the si
> + list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
> + spin_lock(&sched->job_list_lock);
> + list_del_init(&s_job->list);
> + spin_unlock(&sched->job_list_lock);
> + sched->ops->free_job(s_job);
&g
> + * hardware is free enough to run the job. This can be used to
> + * implement more complex hardware resource policies than the
> + * hw_submission limit.
> + */
> + bool (*can_run_job)(struct drm_sched_job *sched_job);
> +
> /**
> * @run_job: Called to execute the job once all of the dependencies
> * have been resolved. This may be called multiple times, if
>
> --
> 2.35.1
>
b9c90...@linaro.org/
Fixes: d39e48ca80c0 ("drm/atomic-helper: Set fence deadline for vblank")
Cc: Ville Syrjälä
Cc: Rob Clark
Cc: Daniel Vetter
Cc: Maarten Lankhorst
Cc: Maxime Ripard
Cc: Thomas Zimmermann
Reported-by: Dmitry Baryshkov
Tested-by: Dmitry Baryshkov # test patch only
Cc: D
On Wed, Apr 05, 2023 at 03:25:15PM +0300, Ville Syrjälä wrote:
> On Wed, Apr 05, 2023 at 10:16:50AM +0200, Daniel Vetter wrote:
> > If the crtc is being switched on or off then the semantics of
> > computing the timestamp of the next vblank is somewhat ill-defined.
> > And in
> >
> > Regards,
> > Jacek
>
> Hi,
> Entire patch-set is:
> Acked-by: Oded Gabbay
Once Jacek has pushed this I think it would also be good to get Jeffrey
commit rights for drm-misc, so that in the future bugfixes for the qaic
driver can be pushed directly by the qaic team.
On Wed, Apr 05, 2023 at 12:21:11PM +0200, Javier Martinez Canillas wrote:
> Daniel Vetter writes:
>
> > Drivers are supposed to fix this up if needed if they don't outright
> > reject it. Uncovered by 6c11df58fd1a ("fbmem: Check virtual screen
> > sizes in fb_s
On Wed, Apr 05, 2023 at 12:52:12PM +0200, Javier Martinez Canillas wrote:
> Daniel Vetter writes:
>
> > Apparently drivers need to check all this stuff themselves, which for
> > most things makes sense I guess. And for everything else we luck out,
> > because modern di
ng special about that part at all) then I think it needs to be at
least a common "nuke a legacy vga device for me pls" function, which
shares the implementation with the pci one.
But not by open-coding just half of it.
> > if (ret)
> > return ret;
> >
> > return 0;
> > }
> >
>
> If this is enough I agree that is much more easier code to understand.
It's still two calls and more code with more bugs? I'm not seeing the
point.
-Daniel
On Tue, Apr 04, 2023 at 07:48:27PM +, Matthew Brost wrote:
> On Tue, Apr 04, 2023 at 09:25:52PM +0200, Daniel Vetter wrote:
> > On Tue, Apr 04, 2023 at 07:02:23PM +, Matthew Brost wrote:
> > > On Tue, Apr 04, 2023 at 08:14:01PM +0200, Thomas Hellström (Intel) wrote:
>
On Wed, Apr 05, 2023 at 02:39:35PM +0200, Christian König wrote:
> Am 05.04.23 um 14:35 schrieb Thomas Hellström:
> > Hi,
> >
> > On 4/4/23 21:25, Daniel Vetter wrote:
> > > On Tue, Apr 04, 2023 at 07:02:23PM +, Matthew Brost wrote:
> > > > On Tue
On Wed, Apr 05, 2023 at 02:32:12PM +0200, Miguel Ojeda wrote:
> On Wed, Apr 5, 2023 at 1:23 PM Daniel Vetter wrote:
> >
> > Ok if this is just interim I think it's fine. Would still be good to have
> > the MAINTAINERS entry though even just to cover the interim st
I'm
still not sold on the idea that this is any good and doesn't just bend the
dma_fence and syncobj rules a bit too much over the breaking point. For
kernel drivers it really should be just a different way to lookup and
return dma_fence from the ioctl, pretty much matching what you could also
do with sync_file (but since syncobj provides generic compat ioctl to
convert to/from sync_file, drivers only need to handle syncobj).
-Daniel
> +SyncObj { ptr: self.ptr }
> +}
> +}
> +
> +// SAFETY: drm_syncobj operations are internally locked.
> +unsafe impl Sync for SyncObj {}
> +unsafe impl Send for SyncObj {}
>
> --
> 2.35.1
>
pretty common pattern we're going to hit everywhere.
> >
>
> I just want to mention, there is a different way to do the abstraction
> here:
>
> similar to
> https://lore.kernel.org/rust-for-linux/ZA9l0EHCRRr%2Fmyoq@boqun-archlinux
>
> * Define Device as a transparent representation of struct drm_device:
>
> #[repr(transparent)]
> struct Device(Opaque<bindings::drm_device>);
>
> * impl `AlwaysRefCounted`[1] for `Device`, therefore we can use
> `ARef<Device>`[2] as a smart pointer to `drm_device`.
>
> * drm_device related methods are still implemented in `impl Device`
>
> * In `open_callback`, we can just get a `&Device` from `*mut
> bindings::drm_device` unsafely, and that's all. Or introduce a helper
> function if we want:
>
> pub unsafe fn with_device<F, R>(ptr: *mut drm_device, f: F) -> Result<R>
> where
> F: FnOnce(&Device) -> Result<R>
> {
> let d = unsafe { &*ptr };
> f(d)
> }
>
> The main difference is that we now treat a pointer to drm_device as a
> reference to the device, not the owner.
>
> It seems we need to also change our driver/device framework to use this
> approach, but it looks better to me.
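A standalone sketch of this borrow-based pattern (plain userspace Rust, stand-in types, all names assumed): the raw pointer is never owned, only borrowed for the duration of a closure call.

```rust
// Stand-in for bindings::drm_device.
#[repr(C)]
struct DrmDeviceC {
    index: u32,
}

// Transparent wrapper: identical layout to the C struct, so a pointer
// to one is a valid pointer to the other.
#[repr(transparent)]
struct Device(DrmDeviceC);

impl Device {
    fn index(&self) -> u32 {
        self.0.index
    }
}

// The pointer is treated as a borrow, not as ownership: the `&Device`
// handed to `f` cannot escape the closure call.
//
// SAFETY: the caller must guarantee `ptr` is valid (and not mutated
// elsewhere) for the duration of `f`.
unsafe fn with_device<R>(ptr: *mut DrmDeviceC, f: impl FnOnce(&Device) -> R) -> R {
    f(&*(ptr as *const Device))
}

fn main() {
    let mut raw = DrmDeviceC { index: 7 };
    let idx = unsafe { with_device(&mut raw, |dev| dev.index()) };
    println!("index = {idx}");
}
```

The key property is in the signature: because the closure only receives `&Device`, the compiler rejects any attempt to stash the reference beyond the call, which matches the "borrowed as a parameter to a driver hook" semantics discussed below.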
So I really don't have enough rust clue to have any useful opinion on how
the glue should look like, but semantically the struct drm_file should
only ever be borrowed as a parameter to a driver hook, so that rust can
guarantee that the driver doesn't do anything funny and use it beyond the
end of that function. This holds for all the callbacks like open/close or
also all the ioctl.
The other semantic thing is that the ioctls should be able to rely on
open having fully constructed the thing. I think the trait and dependent
type stuff ensure that?
What I've missed (but maybe just looked in the wrong place) is that the
ioctl support (and really anything else where the driver gets a struct
drm_file on the C side, but I don't think there is anything else) should
also make sure you get the right driver-specific type and not something
else.
I did notice the FIXME in the first patch; I guess if it makes
landing all this easier we could also keep this as a todo item to improve
once things landed. That btw holds for a lot of the big "how to map
semantics correctly to rust" questions I'm throwing up here. Maybe a
Documentation/gpu/rust.rst file would be good to include, with these todo
items noted instead of just FIXME sprinkled in patches? At least for
things that will take more effort to polish.
-Daniel
>
> Regards,
> Boqun
>
> [1]: https://rust-for-linux.github.io/docs/kernel/trait.AlwaysRefCounted.html
> [2]: https://rust-for-linux.github.io/docs/kernel/struct.ARef.html
>
> > ~Faith
On Wed, Apr 05, 2023 at 01:19:47PM +0200, Miguel Ojeda wrote:
> On Wed, Apr 5, 2023 at 1:08 PM Daniel Vetter wrote:
> >
> > Uh all the rust helper wrappers for all the kernel in a single file does
> > not sound good. Can we not split these up into each subsystem, and then
as *mut FenceObject
> +};
> +
> +if p.is_null() {
> +return Err(ENOMEM);
> + }
> +
> +let seqno = self.seqnos[context as usize].fetch_add(1,
> Ordering::Relaxed);
> +
> +// SAFETY: The pointer is valid, so pointers to members are too.
> +// After this, all fields are initialized.
> +unsafe {
> +addr_of_mut!((*p).inner).write(inner);
> +bindings::__spin_lock_init(
> +addr_of_mut!((*p).lock) as *mut _,
> +self.lock_name.as_char_ptr(),
> +self.lock_key.get(),
> +);
> +bindings::dma_fence_init(
> +addr_of_mut!((*p).fence),
> +&FenceObjectVTABLE,
> +addr_of_mut!((*p).lock) as *mut _,
> +self.start + context as u64,
> +seqno,
> +);
> +};
> +
> +Ok(UniqueFence(p))
> +}
> +}
> +
> +/// A DMA Fence Chain Object
> +///
> +/// # Invariants
> +/// ptr is a valid pointer to a dma_fence_chain which we own.
> +pub struct FenceChain {
> +ptr: *mut bindings::dma_fence_chain,
> +}
> +
> +impl FenceChain {
> +/// Create a new DmaFenceChain object.
> +pub fn new() -> Result {
> +// SAFETY: This function is safe to call and takes no arguments.
> +let ptr = unsafe { bindings::dma_fence_chain_alloc() };
> +
> +if ptr.is_null() {
> +Err(ENOMEM)
> +} else {
> +Ok(FenceChain { ptr })
> +}
> +}
> +
> +/// Convert the DmaFenceChain into the underlying raw pointer.
> +///
> +/// This assumes the caller will take ownership of the object.
> +pub(crate) fn into_raw(self) -> *mut bindings::dma_fence_chain {
> +let ptr = self.ptr;
> +core::mem::forget(self);
> +ptr
> +}
> +}
> +
> +impl Drop for FenceChain {
> +fn drop(&mut self) {
> +// SAFETY: We own this dma_fence_chain.
> +unsafe { bindings::dma_fence_chain_free(self.ptr) };
> +}
> +}
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index cb23d24c6718..31866069e0bc 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -36,6 +36,8 @@ mod allocator;
> mod build_assert;
> pub mod delay;
> pub mod device;
> +#[cfg(CONFIG_DMA_SHARED_BUFFER)]
> +pub mod dma_fence;
> pub mod driver;
> #[cfg(CONFIG_RUST_DRM)]
> pub mod drm;
>
> --
> 2.35.1
>
pub fn into_ref(self) -> ObjectRef {
> +let ptr = self.ptr as *const _;
> +core::mem::forget(self);
> +
> +ObjectRef { ptr }
> +}
> +}
> +
> +impl Drop for UniqueObjectRef {
> +fn drop(&mut self) {
> +// SAFETY: Having a UniqueObjectRef implies holding a GEM
> +// reference. The free callback will take care of deallocation.
> +unsafe {
> +bindings::drm_gem_object_put((*self.ptr).gem_obj());
> +}
> +}
> +}
> +
> +impl Deref for UniqueObjectRef {
> +type Target = T;
> +
> +fn deref(&self) -> &Self::Target {
> +// SAFETY: The pointer is valid per the invariant
> +unsafe { &*self.ptr }
> +}
> +}
> +
> +impl DerefMut for UniqueObjectRef {
> +fn deref_mut(&mut self) -> &mut Self::Target {
> +// SAFETY: The pointer is valid per the invariant
> +unsafe { &mut *self.ptr }
> +}
> +}
> +
> +pub(super) fn create_fops() -> bindings::file_operations {
> +bindings::file_operations {
> +owner: core::ptr::null_mut(),
> +open: Some(bindings::drm_open),
> +release: Some(bindings::drm_release),
> +unlocked_ioctl: Some(bindings::drm_ioctl),
> +#[cfg(CONFIG_COMPAT)]
> +compat_ioctl: Some(bindings::drm_compat_ioctl),
> +#[cfg(not(CONFIG_COMPAT))]
> +compat_ioctl: None,
> +poll: Some(bindings::drm_poll),
> +read: Some(bindings::drm_read),
> +llseek: Some(bindings::noop_llseek),
> +mmap: Some(bindings::drm_gem_mmap),
> +..Default::default()
> +}
> +}
> diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
> index a767942d0b52..c44760a1332f 100644
> --- a/rust/kernel/drm/mod.rs
> +++ b/rust/kernel/drm/mod.rs
> @@ -5,4 +5,5 @@
> pub mod device;
> pub mod drv;
> pub mod file;
> +pub mod gem;
> pub mod ioctl;
>
> --
> 2.35.1
>
On Wed, 5 Apr 2023 at 11:57, Christian König wrote:
>
> Am 05.04.23 um 11:07 schrieb Daniel Vetter:
> > [SNIP]
> >> I would approach it from the complete other side. This component here is a
> >> tool to decide what job should run next.
> >>
> >>
++
> drivers/gpu/drm/i915/gt/uc/intel_huc.h | 7 +--
> drivers/gpu/drm/i915/i915_perf.c | 6 +++---
> 6 files changed, 25 insertions(+), 14 deletions(-)
>
> --
> Jani Nikula, Intel Open Source Graphics Center
On Wed, Apr 05, 2023 at 11:26:51AM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 05.04.23 um 10:59 schrieb Daniel Vetter:
> > On Wed, Apr 05, 2023 at 10:07:54AM +0200, Thomas Zimmermann wrote:
> > > Hi
> > >
> > > Am 05.04.23 um 09:49 schrieb Thomas
On Wed, 5 Apr 2023 at 11:11, Tvrtko Ursulin
wrote:
>
>
> On 05/04/2023 09:28, Daniel Vetter wrote:
> > On Tue, 4 Apr 2023 at 12:45, Tvrtko Ursulin
> > wrote:
> >>
> >>
> >> Hi,
> >>
> >> On 03/04/2023 20:40, Joshua Ashton wrote:
On Wed, Apr 05, 2023 at 10:19:55AM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 04.04.23 um 22:18 schrieb Daniel Vetter:
> > This one nukes all framebuffers, which is a bit much. In reality
> > gma500 is igpu and never shipped with anything discrete, so there should
>
On Wed, Apr 05, 2023 at 10:53:26AM +0200, Christian König wrote:
> Am 05.04.23 um 10:34 schrieb Daniel Vetter:
> > On Wed, Apr 05, 2023 at 09:41:23AM +0200, Christian König wrote:
> > > Am 04.04.23 um 15:37 schrieb Matthew Brost:
> > > > On Tue, Apr 04, 2023 at 11:
On Wed, Apr 05, 2023 at 10:07:54AM +0200, Thomas Zimmermann wrote:
> Hi
>
> Am 05.04.23 um 09:49 schrieb Thomas Zimmermann:
> > Hi
> >
> > Am 04.04.23 um 22:18 schrieb Daniel Vetter:
> > > This one nukes all framebuffers, which is a bit much. In reality
&g