Hi Ian, Rik, Jatinder, Jay and Tom,
We're working on the W3C Media Capture Depth Stream Extension[1], which
adds support for depth cameras[2] to the getUserMedia() API. As part of
that work we've been exploring the different media processing pipelines
on the Web Platform that will let us extract and post-process the data
from these media streams.
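For context, here is a minimal sketch (in TypeScript) of how a page
might request a depth stream. The `depth` constraint member is our
assumption based on the current draft, not settled API, and may change:

  // Sketch only: the `depth` constraint member is an assumption from
  // the draft spec[1]; it is not a shipped API.
  const constraints = { depth: true } as any;

  navigator.mediaDevices.getUserMedia(constraints)
    .then((stream: MediaStream) => {
      const video = document.createElement('video');
      video.srcObject = stream; // feed the depth track into a <video>
      video.play();
    })
    .catch((err) => console.error('No depth camera available:', err));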
The difference with depth streams is that they are backed by a
Uint16Array instead of the usual Uint8ClampedArray viewed as an RGBA
ArrayBufferView. To let Depth Streams be treated like other Media
Streams, we have taken an approach that packs the 16 bits losslessly
into existing codecs[3].
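As a naive illustration of the packing idea, here is a sketch that
splits each 16-bit depth sample across two 8-bit channels and
reassembles it. Note this is only to show the shape of the problem; the
actual scheme in [3] is considerably more robust against lossy codec
compression than a simple high/low byte split:

  // Naive illustration only, not the encoding from [3].
  function packDepth(depth: Uint16Array): Uint8ClampedArray {
    const rgba = new Uint8ClampedArray(depth.length * 4);
    for (let i = 0; i < depth.length; i++) {
      rgba[i * 4]     = depth[i] >> 8;   // R: high byte
      rgba[i * 4 + 1] = depth[i] & 0xff; // G: low byte
      rgba[i * 4 + 3] = 255;             // A: opaque
    }
    return rgba;
  }

  function unpackDepth(rgba: Uint8ClampedArray): Uint16Array {
    const depth = new Uint16Array(rgba.length / 4);
    for (let i = 0; i < depth.length; i++) {
      depth[i] = (rgba[i * 4] << 8) | rgba[i * 4 + 1];
    }
    return depth;
  }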
For WebGL we are proposing a minimal extension to texImage2D and
texSubImage2D so that they can be passed an HTMLVideoElement containing
a Depth Stream based track.
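A sketch of what that upload could look like follows. texImage2D()
already accepts an HTMLVideoElement; the LUMINANCE / UNSIGNED_SHORT
format/type combination for depth tracks is our assumption about how
the extension might surface the 16-bit data, not current WebGL
behaviour:

  // Sketch: upload a depth <video> frame as a WebGL texture. The
  // LUMINANCE/UNSIGNED_SHORT combination is an assumption about the
  // proposed extension, not valid WebGL 1.0 today.
  function uploadDepthFrame(gl: WebGLRenderingContext,
                            depthVideo: HTMLVideoElement): WebGLTexture {
    const tex = gl.createTexture()!;
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE,
                  gl.LUMINANCE, gl.UNSIGNED_SHORT, depthVideo);
    return tex;
  }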
For the 2D canvas based pipeline we have proposed[4] a parallel model to
ImageData called DepthData. It is very similar to ImageData except that
its .data is a Uint16Array, and it adds new CanvasRenderingContext2D
methods like .getDepthData() that return a DepthData object.
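In TypeScript terms, the proposed shape is roughly the following. This
is a sketch of the draft in [4], not final IDL:

  // Rough TypeScript rendering of the proposed DepthData model from [4].
  interface DepthData {
    readonly width: number;
    readonly height: number;
    readonly data: Uint16Array; // one 16-bit depth value per pixel
  }

  interface DepthCapableContext extends CanvasRenderingContext2D {
    getDepthData(sx: number, sy: number,
                 sw: number, sh: number): DepthData;
  }

  // Usage sketch: draw a depth <video> frame, then read the values back.
  declare const ctx: DepthCapableContext;
  declare const depthVideo: HTMLVideoElement;
  ctx.drawImage(depthVideo, 0, 0);
  const depth = ctx.getDepthData(0, 0, 320, 240);
  const mid = depth.data[(120 * depth.width) + 160]; // centre pixel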
We're currently drafting this specification in its own extension
spec[1]; the FPWD was published on 7 October 2014. We'd now like your
feedback on the design, to make sure we integrate cleanly with
CanvasRenderingContext2D and allow the extension to be folded in at a
later stage if so desired.
Thanks.
roBman
[1] http://w3c.github.io/mediacapture-depth/
[2] https://www.w3.org/wiki/Media_Capture_Depth_Stream_Extension
[3] http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf
[4] http://w3c.github.io/mediacapture-depth/#canvasrenderingcontext2d-interface