On Thu, Mar 12, 2020 at 2:29 AM Gerd Hoffmann <kra...@redhat.com> wrote:
> On Wed, Mar 11, 2020 at 04:36:16PM -0700, Gurchetan Singh wrote:
> > On Wed, Mar 11, 2020 at 3:36 AM Gerd Hoffmann <kra...@redhat.com> wrote:
> > >
> > >   Hi,
> > >
> > > > I should've been more clear -- this is an internal cleanup/preparation
> > > > and the per-context changes are invisible to host userspace.
> > >
> > > Ok, it wasn't clear that you don't flip the switch yet.  In general the
> > > commit messages could be a bit more verbose ...
> > >
> > > I'm wondering though why we need the new fence_id in the first place.
> > > Isn't it enough to have per-context (instead of global) last_seq?
> >
> > Heh, that was to leave open the possibility of multiple timelines per
> > context.  Roughly speaking,
> >
> > V2 -- multiple processes
> > V3 -- multiple processes and multiple threads (due to VK multi-threaded
> > command buffers)
> >
> > I think we all agree on V2.  It seems we still have to discuss V3
> > (multi-queue, thread pools, a fence context associated with each thread)
> > a bit more before we start landing pieces.
>
> While thinking about the whole thing a bit more ...
> Do we need virtio_gpu_ctrl_hdr->fence_id at all?
>
A fence ID could be useful for sharing fences across virtio devices.
Think FENCE_ASSIGN_UUID, akin to RESOURCE_ASSIGN_UUID (+dstevens@).

> At virtio level it is pretty simple: The host completes the SUBMIT_3D
> virtio command when it finished rendering, period.
>
> On the guest side we don't need the fence_id.  The completion callback
> gets passed the virtio_gpu_vbuffer, so it can figure which command did
> actually complete without looking at virtio_gpu_ctrl_hdr->fence_id.
>
> On the host side we depend on the fence_id right now, but only because
> that is the way the virgl_renderer_callbacks->write_fence interface is
> designed.  We have to change that anyway for per-context (or whatever)
> fences, so it should not be a problem to drop the fence_id dependency
> too and just pass around an opaque pointer instead.
For multiple GPU timelines per context, the (process local) sync object
handle looks interesting:

https://patchwork.kernel.org/patch/9758565/

Some have extended EXECBUFFER to support this flow:

https://patchwork.freedesktop.org/patch/msgid/1499289202-25441-1-git-send-email-jason.ekstr...@intel.com

> cheers,
>   Gerd
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel