I don't know if this is useful or not, but I've pretty successfully combined a JavaFX UI with the MPV video player (also VLC), without resorting to any kind of frame copying.

It basically involves finding the HWND (or Window id on Linux) of
a JavaFX Stage, then telling MPV / VLC to render directly into that window. A second window (transparent where needed) is then used to render the JavaFX content.

A top-level Stage is created, for which we find the HWND. This is
where we let MPV render its content. The top-level Stage also has a
child stage, which tracks the size and location of the main stage.
This child stage is transparent and contains a JavaFX Scene whose
(partially) transparent background lets the content of the main
stage show through. Child stages automatically appear on top of
their parent stage.
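A minimal sketch of that setup might look like the following. This is not the actual MediaSystem code; it assumes JNA (with jna-platform) is on the classpath for the Windows-only HWND lookup, finds the window by a unique title, and uses a placeholder mpvSetOptionLong(...) for whatever MPV binding passes the "wid" option:

```java
import com.sun.jna.Pointer;
import com.sun.jna.platform.win32.User32;
import com.sun.jna.platform.win32.WinDef.HWND;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.stage.StageStyle;

public class EmbeddedPlayerSketch extends Application {

    @Override
    public void start(Stage primaryStage) {
        primaryStage.setTitle("VideoHostWindow");  // unique title so we can find the HWND
        primaryStage.setScene(new Scene(new StackPane(), 1280, 720));
        primaryStage.show();

        // Windows-only: look up the native window handle by title via JNA.
        HWND hwnd = User32.INSTANCE.FindWindow(null, "VideoHostWindow");
        long wid = Pointer.nativeValue(hwnd.getPointer());

        // Hand the native window id to the player; MPV accepts it as the
        // "wid" option (mpvSetOptionLong is a placeholder for your binding):
        // mpvSetOptionLong("wid", wid);

        // Transparent child stage that carries the JavaFX UI on top of the video.
        Stage overlay = new Stage(StageStyle.TRANSPARENT);
        overlay.initOwner(primaryStage);  // child stages stay on top of their owner

        StackPane overlayRoot = new StackPane();  // controls, clock, etc. go here
        overlayRoot.setStyle("-fx-background-color: transparent;");
        Scene overlayScene = new Scene(overlayRoot, 1280, 720);
        overlayScene.setFill(Color.TRANSPARENT);
        overlay.setScene(overlayScene);

        // Keep the overlay glued to the main stage's position and size.
        primaryStage.xProperty().addListener((obs, o, n) -> overlay.setX(n.doubleValue()));
        primaryStage.yProperty().addListener((obs, o, n) -> overlay.setY(n.doubleValue()));
        primaryStage.widthProperty().addListener((obs, o, n) -> overlay.setWidth(n.doubleValue()));
        primaryStage.heightProperty().addListener((obs, o, n) -> overlay.setHeight(n.doubleValue()));

        overlay.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```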

Here's a screenshot of how that looks, where the control panel, timer, clock and pause indicator are JavaFX and the background is MPV:

https://github.com/hjohn/MediaSystem-v2/blob/master/screenshot-5.jpg

Although this works pretty well, there are some limitations. First,
it may not work as well on Linux or Mac, as I rarely test this
solution there. Second, you cannot apply any kind of special effect
to the MPV content (like a blur or something).

--John

On 15/02/2021 14:40, Mark Raynsford wrote:
Hello!

I'd like to use JavaFX for the UI of an application that will
involve rendering using an existing Vulkan-based renderer. For the sake
of example, assume that the application looks and behaves a bit like
the Unreal Engine 4 editing tools. Here's an example of those:

  https://www.youtube.com/watch?v=2UowdJetXwA

My understanding right now is that there isn't direct support in
JavaFX for building this kind of application, and the primary reason
for this is that there's a sort of conceptual wrestling match for
control of a platform-specific rendering context here. For example:

  * A JavaFX application will tell JavaFX to open a new window,
    and the JavaFX implementation will do all of the
    OS-windowing-system-specific things to achieve this, and will
    also set up a system-specific rendering context depending on
    what the underlying platform is (OpenGL, DirectX, Metal, etc).
    JavaFX then translates input such as mouse and keyboard events
    from OS-specific types to the platform-independent JavaFX types
    so that the application can process them.

  * A typical Vulkan application will ask something analogous to
    the GLFW library to open a new window, and set up a rendering
    context. The GLFW library then translates input such as mouse and
    keyboard events from OS-specific types to generic GLFW event
    types, and the Vulkan application (probably) translates these
    into its own application-specific event types for processing.

Obviously, in a given application, we can have either one of these
things happen, but realistically not both.

The current approach (as of JavaFX 14) seems to be to use the
PixelBuffer API in order to provide a CPU-side bridge between
JavaFX and whatever rendering system is being used for external 3D
rendering. In other words, this is the expected setup:

  1. A JavaFX application will tell JavaFX to open a new window,
     and JavaFX will do all of the system-specific work required
     as described previously.

  2. The application will then tell a library such as GLFW to
     create an _offscreen_ rendering context, perhaps configuring
     Vulkan or OpenGL.

  3. The application, at the end of each frame, copies the contents
     of the offscreen rendering context's framebuffer into a PixelBuffer
     instance to be displayed inside a JavaFX UI.
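Step 3 can be sketched with the PixelBuffer API introduced in JavaFX 13. This is a minimal outline under assumed conditions: a fixed 1280x720 size, the external renderer writing premultiplied BGRA pixels into the direct buffer, and updateBuffer being called on the JavaFX Application Thread:

```java
import java.nio.ByteBuffer;

import javafx.scene.image.ImageView;
import javafx.scene.image.PixelBuffer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.WritableImage;

public class OffscreenBridge {

    private final int width = 1280;
    private final int height = 720;

    // The external renderer (Vulkan, OpenGL, ...) writes BGRA pixels here.
    private final ByteBuffer frame = ByteBuffer.allocateDirect(width * height * 4);

    // PixelBuffer wraps the buffer directly; JavaFX reads from it without
    // an extra application-side copy.
    private final PixelBuffer<ByteBuffer> pixelBuffer =
        new PixelBuffer<>(width, height, frame, PixelFormat.getByteBgraPreInstance());

    private final WritableImage image = new WritableImage(pixelBuffer);

    public ImageView createView() {
        return new ImageView(image);
    }

    // Call on the JavaFX Application Thread after the renderer finishes a frame.
    public void onFrameReady() {
        // Returning null from the callback marks the entire buffer as dirty.
        pixelBuffer.updateBuffer(pb -> null);
    }
}
```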

This, as far as I know, works correctly. The main complaint with
this is that it pays a pretty heavy price: There's one framebuffer-sized
copy operation from the GPU to the CPU (in order to read the required
pixels), and then another framebuffer-sized copy operation back from
the CPU to the GPU (either when writing into the PixelBuffer, or when
JavaFX renders the contents of that PixelBuffer to the screen).

My understanding is that it's essentially necessary to do these two
rather expensive copying operations merely because JavaFX can't and
won't expose the underlying rendering context it uses for its own UI
rendering, and it also can't be expected to talk to whatever other
rendering system the application might be using. The problem is
essentially "we have these two systems both using the GPU, but they
don't know each other and therefore we can't write code to get
memory from one to the other without going via the CPU".

Is this an accurate picture of the situation?

As someone working exclusively with Vulkan, I can arrange to have
the GPU copy the framebuffer into host-visible (not necessarily
host-resident, but host _visible_) memory at the end of each frame.
It's a little sad to have to actually copy that memory over the PCI bus
just to immediately copy it back again, though. Is there no design we
could come up with that would allow for at worst a simple GPU → GPU
copy? I'm resigned to the fact that a copying operation is probably
going to happen _somewhere_, but it'd be nice if we could avoid a
rather expensive and redundant GPU → CPU → GPU copy.
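For reference, the readback into host-visible memory mentioned above might look like this with LWJGL's Vulkan bindings. All handles and their setup are assumed to exist elsewhere: the image must be in TRANSFER_SRC_OPTIMAL layout, and the buffer created with TRANSFER_DST usage on host-visible memory:

```java
import static org.lwjgl.vulkan.VK10.*;

import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkBufferImageCopy;
import org.lwjgl.vulkan.VkCommandBuffer;

public final class FrameReadback {

    // Records a tightly packed copy of the rendered color image into a
    // host-visible buffer, to be mapped and handed to the PixelBuffer.
    public static void recordCopy(VkCommandBuffer cmd, long image, long buffer,
                                  int width, int height) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            VkBufferImageCopy.Buffer region = VkBufferImageCopy.calloc(1, stack);
            region.bufferOffset(0)
                  .bufferRowLength(0)       // 0 = tightly packed rows
                  .bufferImageHeight(0);
            region.imageSubresource()
                  .aspectMask(VK_IMAGE_ASPECT_COLOR_BIT)
                  .mipLevel(0)
                  .baseArrayLayer(0)
                  .layerCount(1);
            region.imageOffset().x(0).y(0).z(0);
            region.imageExtent().width(width).height(height).depth(1);

            vkCmdCopyImageToBuffer(cmd, image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                                   buffer, region);
        }
    }
}
```

This is exactly the GPU → CPU half of the round trip; the hope expressed above is that a future API could make it the only copy, or eliminate it in favor of a GPU → GPU transfer.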
