Hi Gustavo,

On 2016-03-24 23:39, Gustavo Padovan wrote:
> Hi Inki,
>
> 2016-03-24 Inki Dae <inki....@samsung.com>:
>
>> Hi,
>>
>> On 2016-03-24 03:47, Gustavo Padovan wrote:
>>> From: Gustavo Padovan <gustavo.pado...@collabora.co.uk>
>>>
>>> Hi,
>>>
>>> This is a first proposal to discuss the addition of in-fences support
>>> to DRM. It adds a new struct to fence.c to abstract the use of sync_file
>>> in DRM drivers. The new struct fence_collection contains an array of all
>>> the fences that an atomic commit needs to wait on.
>>
>> As I mentioned already below,
>> http://www.spinics.net/lists/dri-devel/msg103225.html
>>
>> I don't see why an Android-specific mechanism is being propagated into
>> Linux DRM. Linux mainline already has implicit sync interfaces for DMA
>> devices, called dma fences, which require registering a fence object
>> with a DMABUF through a reservation object when the dmabuf object is
>> created. The Android sync driver, however, creates a new file for each
>> sync object, which is a different point of view.
>>
>> Can anyone explain why this Android-specific mechanism is being spread
>> into Linux DRM? Was there any consensus to use the Android sync driver -
>> which uses explicit sync interfaces - as the Linux standard?
>
> Because we want explicit fencing as the Linux standard in the future, to
> be able to do smart scheduling, e.g., send async jobs to the GPU and at
> the same time send async atomic commits with a sync_file fd attached, so
> they can wait for the GPU to finish and we don't block in userspace
> anymore, quite similar to what Android does.
The GPU is also a DMA device, so I think the synchronization should be
handled transparently to user-space. And I know a Chromium developer
already did a similar thing - with non-atomic commits only - using
implicit sync:
https://chromium.googlesource.com/chromiumos/third_party/kernel
branch name: chromeos-3.14

Of course, that approach uses a new helper framework placed in the drm
directory, so I think this framework could be moved into the dma-buf
directory after some cleanup and refactoring if necessary.

Anyway, I'm not sure I understood the smart scheduling you mentioned, but
I think we could do what you are trying to do without explicit fences.

>
> This would still use dma-buf fences in the driver level, but it has a
> lot more advantages than implicit fencing.

Do you mean things like rendering pipeline debugging and merging sync
fences?

Thanks,
Inki Dae

>
> Gustavo
>
>