On 2025/06/21 4:47, Yiwei Zhang wrote:
On Thu, Jun 19, 2025 at 11:45 PM Alex Bennée <alex.ben...@linaro.org> wrote:

Yiwei Zhang <zzyi...@gmail.com> writes:

On Sun, Jun 8, 2025 at 1:24 AM Akihiko Odaki
<od...@rsg.ci.i.u-tokyo.ac.jp> wrote:

On 2025/06/06 1:26, Alex Bennée wrote:
From: Yiwei Zhang <zzyi...@gmail.com>

Venus and later native contexts have their own fence context along with
multiple timelines within. Fences with VIRTIO_GPU_FLAG_INFO_RING_IDX in
the flags must be dispatched to be created on the target context. Fence
signaling also has to be handled on the specific timeline within that
target context.
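
[Editor's note: for illustration only, a rough sketch of the dispatch this
implies on the QEMU side. It assumes virglrenderer's per-context fence API
(virgl_renderer_context_create_fence() plus the write_context_fence()
callback); exact signatures vary between virglrenderer versions, and the
function names below are hypothetical, not the actual patch.]

    /*
     * Sketch only: route fence creation based on
     * VIRTIO_GPU_FLAG_INFO_RING_IDX in the command header.
     */
    static void virgl_cmd_create_fence(VirtIOGPU *g,
                                       struct virtio_gpu_ctrl_command *cmd)
    {
        if (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_INFO_RING_IDX) {
            /* Create the fence on the target context's own timeline. */
            virgl_renderer_context_create_fence(cmd->cmd_hdr.ctx_id,
                                                0 /* flags */,
                                                cmd->cmd_hdr.ring_idx,
                                                cmd->cmd_hdr.fence_id);
            return;
        }
        /* Otherwise fall back to the legacy global fence timeline. */
        virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, 0);
    }

    /*
     * Completion side: per-context fences signal on a (ctx_id, ring_idx)
     * timeline rather than on the single global timeline, so only
     * commands fenced on that exact timeline may be retired.
     */
    static void virgl_write_context_fence(void *opaque, uint32_t ctx_id,
                                          uint32_t ring_idx,
                                          uint64_t fence_id)
    {
        VirtIOGPU *g = opaque;
        struct virtio_gpu_ctrl_command *cmd, *tmp;

        QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
            if (cmd->cmd_hdr.fence_id > fence_id ||
                cmd->cmd_hdr.ctx_id != ctx_id ||
                cmd->cmd_hdr.ring_idx != ring_idx) {
                continue; /* still pending, or on another timeline */
            }
            virtio_gpu_ctrl_response_nodata(g, cmd,
                                            VIRTIO_GPU_RESP_OK_NODATA);
            QTAILQ_REMOVE(&g->fenceq, cmd, next);
            g_free(cmd);
        }
    }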

Before this change, venus fencing is completely broken if the host
driver doesn't support implicit fencing with external memory objects.
Frames can go backwards along with random artifacts on screen if the
host driver doesn't attach an implicit fence to the render target. The
symptom could be hidden by certain guest WSI backends that wait on a
venus-native VkFence object for the actual payload, with limited present
modes or under special configs, e.g. X11 mailbox or Xwayland.

After this change, everything related to venus fencing starts making
sense. Confirmed this via guest and host side perfetto tracing.

Cc: qemu-sta...@nongnu.org
Fixes: 94d0ea1c1928 ("virtio-gpu: Support Venus context")

I suggest moving this to the front of the patch series to ease backporting.

I also wonder if "[PULL 11/17] ui/gtk-gl-area: Remove extra draw call in
refresh" requires Cc: qemu-sta...@nongnu.org. Fixing -display gtk,gl=on
for Wayland sounds as important as this patch.

Regards,
Akihiko Odaki

Hi Alex,

+1 to Akihiko's point. I'm also curious: when will the venus fix land
in-tree?

We have a problem in that there are two contradictory bugs: one that shows
up in the x86/KVM case and one in the aarch64/TCG case. Both are caused
by the weird lifetime semantics of the virgl resource vs. the QEMU memory
region, and we haven't found a fix that covers both yet.

That sounds unrelated to the venus fix. It might be worth filing a
virglrenderer issue with some details; more eyes would help if this
turns out to be a known KVM issue seen before on other VMMs.

This patch itself looks good to me so:

Reviewed-by: Akihiko Odaki <od...@rsg.ci.i.u-tokyo.ac.jp>

It's up to Alex whether or not to wait for other patches (the blob mapping/memory region fix in particular) to pile up.

Regards,
Akihiko Odaki
