On 2016-01-04 4:46 PM, Robert O'Callahan wrote:
> On Tue, Jan 5, 2016 at 10:46 AM, Kearwood "Kip" Gilbert <
> kgilb...@mozilla.com> wrote:
>
>> In WebVR, we often present UI as a Heads-Up Display (HUD) that floats
>> in front of the user.  Additionally, we often wish to show 2d graphics,
>> video, and CSS animations as textures in 3d scenes.  Creating these
>> textures is something that CSS and HTML are great at.
>>
>> Unfortunately, I am not aware of an easy and efficient way to capture
>> an animated, interactive HTML element and bring it into the WebGL
>> context.  A "moz-element"-like API would be useful here.
>>
>> Perhaps we could solve this by implementing and extending the proposed
>> WEBGL_dynamic_texture extension:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
> This proposal seems unnecessarily complex. Is there a way for me to send
> feedback?

There is a mailing list, public_we...@khronos.org:

https://www.khronos.org/webgl/public-mailing-list/

There is also a github repository with issues and pull requests:

https://github.com/KhronosGroup/WebGL/tree/master/extensions/proposals

>
>> Essentially, we would extend the same API but allow the WDTStream
>> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> HTMLImageElement, or HTMLVideoElement.
>>
>> We would also need to implement WEBGL_security_sensitive_resources to
>> enforce the security model:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
> I wish I'd known about this proposal earlier! This looks pretty good,
> though I'd always thought this would be too complicated to spec and
> implement to be practical. Glad to be wrong! Although I think we should get
> as much feedback as possible on this in case of hidden gotchas.
>
>> Does this sound like a good idea?  I feel that this is something that
>> all WebGL developers would want, as it would make building front-ends
>> for games much easier.
>>
> Yes, I think together these would be very useful.
>
>> If others feel the same, I would also like to follow up with a proposal
>> to make the captured HTML elements interactive through use of an
>> explicit "pick buffer" added to canvases.
>>
> How would that work? Being able to synthesize mouse (touch?) events in HTML
> elements would add another set of issues.
The intent is to enable content to work transparently with multiple
cursor models implemented at the platform level.  We wish the cursor to
remain responsive even if the page is not. 

As vertex shaders may be applied to elements, and elements may be
occluded by other objects in a 3d scene, content needs to describe the
relationship between the original coordinate system of the elements and
the rendered output.  The proposed method of doing this is with a pick
buffer.

A pick buffer can be viewed as a simple GPU-accelerated raycasting
engine for the underlying scene.

A pick buffer is composed of an additional render target, rendered
either in a separate pass or, using WEBGL_draw_buffers, in a single
pass.  Rather than interpreting the 32-bpp data as an RGBA image for
display, the fragment shader encodes an object id and interpolated uv
coordinates.  When the user passes their mouse over the canvas (or
adjusts their gaze with a stereoscopic VR cursor), the pick buffer would
be sampled and interpreted by platform-level code.  The information
within the sample would be used to identify the underlying element
(through a pick-buffer-id-to-element-id lookup table) as well as the
coordinate, within the untransformed space of the element, at which to
synthesize events.
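
To make the encoding concrete, here is a minimal sketch; the bit
layout, names, and packing scheme are illustrative only, not part of
any proposal.  A WebGL1-style fragment shader packs an 8-bit object id
and uv coordinates quantized to 12 bits per axis into a single RGBA8
texel, and the sampling side reads one texel back under the cursor:

    // Illustrative GLSL: pack an 8-bit object id (R) and uv quantized
    // to 12 bits per axis (G/B/A) into one RGBA8 pick-buffer texel.
    // With WEBGL_draw_buffers this could instead write to gl_FragData[1]
    // during the main pass rather than in a separate pick pass.
    const pickFragmentSource = `
      precision mediump float;
      uniform float uObjectId; // 0..255, assigned per captured element
      varying vec2 vUv;        // uv in the element's untransformed space
      void main() {
        float u = floor(vUv.x * 4095.0);
        float v = floor(vUv.y * 4095.0);
        gl_FragColor = vec4(
          uObjectId / 255.0,                                // R: object id
          floor(u / 16.0) / 255.0,                          // G: u[11:4]
          (mod(u, 16.0) * 16.0 + floor(v / 256.0)) / 255.0, // B: u[3:0] v[11:8]
          mod(v, 256.0) / 255.0);                           // A: v[7:0]
      }`;

    // Platform-side sampling: with the pick framebuffer bound, read the
    // texel under the cursor and decode it back into (id, u, v).
    function pickAt(gl: WebGLRenderingContext, x: number, y: number) {
      const px = new Uint8Array(4);
      gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
      const id = px[0]; // key into the pick-buffer-id-to-element-id table
      const u = (px[1] * 16 + (px[2] >> 4)) / 4095;
      const v = ((px[2] & 15) * 256 + px[3]) / 4095;
      return { id, u, v };
    }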

Security is a concern, as content could use this feature to disguise an
element by introducing a discontinuity between the displayed buffer and
the pick buffer.  I would suggest that we enforce a CORS-like policy
that disables pick buffers unless all source textures used to render the
buffer came from sanitized inputs and are not cross-origin.
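
As a sketch of how enforcement could look (the names below are
hypothetical, and the real checks would presumably mirror
WEBGL_security_sensitive_resources):

    // Hypothetical taint tracking: a pick buffer remains enabled only
    // if every texture sampled while rendering it came from a
    // sanitized, same-origin (or CORS-approved) source.
    interface TextureProvenance {
      sanitized: boolean;   // produced through the sanitizing render path
      crossOrigin: boolean; // sourced cross-origin without CORS approval
    }

    function pickBufferAllowed(sources: TextureProvenance[]): boolean {
      return sources.every(s => s.sanitized && !s.crossOrigin);
    }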

In the case of implementing a VR web browser with 2d backwards
compatibility, some of these restrictions could perhaps be relaxed when
the API is accessed only by chrome.

To reduce performance overhead, content could render the pick buffer at
longer intervals, or at a lower resolution, than the displayed output.
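
For instance (pickFbo and renderPickPass are assumed to be set up
elsewhere, and the constants are arbitrary):

    // Illustrative throttling: render the pick buffer at quarter
    // resolution and only every fourth frame; cursor hit-testing
    // rarely needs per-frame, full-resolution accuracy.  Cursor
    // coordinates must be scaled by PICK_SCALE before sampling.
    const PICK_SCALE = 0.25;
    const PICK_INTERVAL = 4; // frames between pick-buffer updates
    let frame = 0;

    function maybeRenderPickBuffer(gl: WebGLRenderingContext,
                                   pickFbo: WebGLFramebuffer,
                                   renderPickPass: () => void) {
      if (frame++ % PICK_INTERVAL !== 0) return;
      gl.bindFramebuffer(gl.FRAMEBUFFER, pickFbo);
      gl.viewport(0, 0,
                  Math.floor(gl.drawingBufferWidth * PICK_SCALE),
                  Math.floor(gl.drawingBufferHeight * PICK_SCALE));
      renderPickPass();
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    }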
>
> I assume the idea of mixing CSS 3D-transformed elements into a WebGL scene
> has been rejected for some reason?
>
> Rob
Mixing CSS 3D-transformed elements into WebGL is still on the table;
however, this proposal is more future-proof and enables more expressive
content.  In the VR HUD case it is more ergonomic to present panels as
curved surfaces rather than as flat planes.  When placing elements
within the 3d scene itself, it is desirable to light the 2d DOM elements
with the same lighting model as the rest of the scene, and to have them
cast shadows, receive shadows, and receive reflections.  (Think of
interactive 2d DOM content embedded within a glossy, reflective panel.)

- Kip
