Title: [288504] branches/safari-613-branch
Revision: 288504
Author: repst...@apple.com
Date: 2022-01-24 17:55:14 -0800 (Mon, 24 Jan 2022)

Log Message

Cherry-pick r288025. rdar://problem/83407577

    gl.texImage2D upload of getUserMedia streams via <video> element fails
    https://bugs.webkit.org/show_bug.cgi?id=230617
    <rdar://problem/83407577>

    Patch by Kimmo Kinnunen <kkinnu...@apple.com> on 2022-01-14
    Reviewed by Youenn Fablet.

    Source/WebCore:

    Fix MSE camera to WebGL texture uploads.
    Partially revert r280963 for Cocoa MediaPlayer implementations
    that do not have nativeImageForCurrentTime / pixelBufferForCurrentTime.
    It turns out MSE does not implement these, so for now we fall back
    to the painting path.

    Test: fast/mediastream/getUserMedia-to-canvas.html

    * html/canvas/WebGLRenderingContextBase.cpp:
    (WebCore::WebGLRenderingContextBase::videoFrameToImage):

    LayoutTests:

    * fast/mediastream/getUserMedia-to-canvas-expected.txt: Added.
    * fast/mediastream/getUserMedia-to-canvas.html: Added.
    Add a test for drawing video frames into 2DContext and
    WebGL canvas elements.

    * webrtc/routines.js:
    Add a function to assert that an ImageData instance contains
    the simulated mock camera image. This is useful
    for verifying that the image is exactly as
    expected. The test covers only the default orientation;
    later changes will update it to address
    camera rotation.

    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@288025 268f45cc-cd09-0410-ab3c-d52691b4dbfc
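For context, the failing scenario this patch addresses boils down to the sketch below: uploading the current frame of a getUserMedia-backed <video> element into a WebGL texture with gl.texImage2D. It uses only standard web APIs and is an illustration, not part of the patch; the helper name uploadCameraFrameToTexture and the texture parameter choices are this sketch's own. The new layout test included in the diff below exercises the same path against WebKit's mock camera and verifies the resulting pixels.

    // Minimal sketch (not part of the patch): upload the current frame of a
    // getUserMedia-backed <video> element into a WebGL texture.
    async function uploadCameraFrameToTexture(gl) {
        const video = document.createElement("video");
        video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
        await video.play();

        const texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        // Before this fix, this upload failed for Cocoa media players that lack
        // nativeImageForCurrentTime / pixelBufferForCurrentTime.
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
        return texture;
    }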

Modified Paths

branches/safari-613-branch/LayoutTests/ChangeLog
branches/safari-613-branch/LayoutTests/webrtc/routines.js
branches/safari-613-branch/Source/WebCore/ChangeLog
branches/safari-613-branch/Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp

Added Paths

branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas-expected.txt
branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas.html

Diff

Modified: branches/safari-613-branch/LayoutTests/ChangeLog (288503 => 288504)


--- branches/safari-613-branch/LayoutTests/ChangeLog	2022-01-25 01:55:10 UTC (rev 288503)
+++ branches/safari-613-branch/LayoutTests/ChangeLog	2022-01-25 01:55:14 UTC (rev 288504)
@@ -1,5 +1,67 @@
 2022-01-24  Alan Coon  <alanc...@apple.com>
 
+        Cherry-pick r288025. rdar://problem/83407577
+
+    gl.texImage2D upload of getUserMedia streams via <video> element fails
+    https://bugs.webkit.org/show_bug.cgi?id=230617
+    <rdar://problem/83407577>
+    
+    Patch by Kimmo Kinnunen <kkinnu...@apple.com> on 2022-01-14
+    Reviewed by Youenn Fablet.
+    
+    Source/WebCore:
+    
+    Fix MSE camera to WebGL texture uploads.
+    Partially revert r280963 for Cocoa MediaPlayer implementations
+    that do not have nativeImageForCurrentTime / pixelBufferForCurrentTime.
+    Turns out MSE does not have these implemented, so currently fall back
+    to the painting path.
+    
+    Test: fast/mediastream/getUserMedia-to-canvas.html
+    
+    * html/canvas/WebGLRenderingContextBase.cpp:
+    (WebCore::WebGLRenderingContextBase::videoFrameToImage):
+    
+    LayoutTests:
+    
+    * fast/mediastream/getUserMedia-to-canvas-expected.txt: Added.
+    * fast/mediastream/getUserMedia-to-canvas.html: Added.
+    Add a test to test getting video frame to 2DContext and
+    WebGL canvas elements.
+    
+    * webrtc/routines.js:
+    Add a function to assert that ImageData contains
+    the simulated mock camera image. This is useful
+    in verifying that the image is exactly as
+    expected. The test tests only the default orientation.
+    Later changes will update the test to address the
+    camera rotation.
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@288025 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2022-01-14  Kimmo Kinnunen  <kkinnu...@apple.com>
+
+            gl.texImage2D upload of getUserMedia streams via <video> element fails
+            https://bugs.webkit.org/show_bug.cgi?id=230617
+            <rdar://problem/83407577>
+
+            Reviewed by Youenn Fablet.
+
+            * fast/mediastream/getUserMedia-to-canvas-expected.txt: Added.
+            * fast/mediastream/getUserMedia-to-canvas.html: Added.
+            Add a test to test getting video frame to 2DContext and
+            WebGL canvas elements.
+
+            * webrtc/routines.js:
+            Add a function to assert that ImageData contains
+            the simulated mock camera image. This is useful
+            in verifying that the image is exactly as
+            expected. The test tests only the default orientation.
+            Later changes will update the test to address the
+            camera rotation.
+
+2022-01-24  Alan Coon  <alanc...@apple.com>
+
         Cherry-pick r288015. rdar://problem/84617690
 
     [LFC][IFC] Incorrect root inline box position when non-empty atomic inline level child box has height: 0px

Added: branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas-expected.txt (0 => 288504)


--- branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas-expected.txt	                        (rev 0)
+++ branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas-expected.txt	2022-01-25 01:55:14 UTC (rev 288504)
@@ -0,0 +1,6 @@
+
+PASS Testing getUserMedia stream to canvas contexts via 2DContext
+PASS Testing getUserMedia stream to canvas contexts via ImageBitmap to 2DContext
+PASS Testing getUserMedia stream to canvas contexts via WebGL texture
+PASS Testing getUserMedia stream to canvas contexts via ImageBitmap to WebGL texture
+

Added: branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas.html (0 => 288504)


--- branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas.html	                        (rev 0)
+++ branches/safari-613-branch/LayoutTests/fast/mediastream/getUserMedia-to-canvas.html	2022-01-25 01:55:14 UTC (rev 288504)
@@ -0,0 +1,133 @@
+<!doctype html>
+<html>
+    <head>
+        <meta charset="utf-8">
+        <title>Testing getUserMedia stream to canvas contexts</title>
+        <script src=""
+        <script src=""
+        <script src=""
+        <script src=""
+
+        <style type=text/css>
+            canvas { width: 600px }
+        </style>
+    </head>
+    <body>
+        <div id="debuge"></div>
+        <script>
+"use strict";
+const wtu = WebGLTestUtils;
+const width = 643;
+const verifyWidth = 200;
+const debuge = document.getElementById("debuge");
+
+async function createSourceVideo() {
+    let video = document.createElement("video");
+    video.srcObject = await navigator.mediaDevices.getUserMedia({ video: { width: { exact: width } } });
+    await video.play();
+    assert_equals(video.videoWidth, width);
+    return video;
+}
+
+function createVerifyCanvas(video) {
+    let canvas = document.createElement("canvas");
+    canvas.width = verifyWidth;
+    canvas.height = Math.floor(video.videoHeight / video.videoWidth * verifyWidth);
+    return canvas;
+}
+
+function createVerifyWebGLContext(canvas) {
+    let gl = wtu.create3DContext(canvas, { depth: false, stencil: false, antialias: false });
+    gl.viewport(0, 0, canvas.width, canvas.height);
+    let program = wtu.setupTexturedQuad(gl);
+    gl.uniform1i(gl.getUniformLocation(program, "tex"), 0);
+    return gl;
+}
+
+function getFramebufferAsImageData(gl) {
+    let canvas = gl.canvas;
+    let imageData = {
+        width: canvas.width,
+        height: canvas.height,
+        data: new Uint8Array(canvas.width * canvas.height * 4)
+    }
+    gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, imageData.data);
+    return imageData;
+}
+
+promise_test(async t => {
+    let video = await createSourceVideo();
+    debuge.append(video);
+    let canvas = createVerifyCanvas(video);
+    debuge.appendChild(canvas);
+
+    let ctx = canvas.getContext("2d");
+    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
+    assertImageDataContainsMockCameraImage(ctx.getImageData(0, 0, canvas.width, canvas.height));
+    debuge.removeChild(canvas);
+    debuge.removeChild(video);
+
+}, document.title + " via 2DContext");
+
+promise_test(async t => {
+    let video = await createSourceVideo();
+    debuge.append(video);
+    let canvas = createVerifyCanvas(video);
+    debuge.appendChild(canvas);
+
+    let ctx = canvas.getContext("2d");
+    let imageBitmap = await createImageBitmap(video);
+    ctx.drawImage(imageBitmap, 0, 0, canvas.width, canvas.height);
+    assertImageDataContainsMockCameraImage(ctx.getImageData(0, 0, canvas.width, canvas.height));
+    debuge.removeChild(canvas);
+    debuge.removeChild(video);
+}, document.title + " via ImageBitmap to 2DContext");
+
+promise_test(async t => {
+    let video = await createSourceVideo();
+    debuge.append(video);
+    let canvas = createVerifyCanvas(video);
+    debuge.appendChild(canvas);
+
+    let gl = createVerifyWebGLContext(canvas);
+    let texture = gl.createTexture();
+    gl.bindTexture(gl.TEXTURE_2D, texture);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
+    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
+    wtu.clearAndDrawUnitQuad(gl, [0, 0, 0, 255]);
+
+    let imageData = getFramebufferAsImageData(gl);
+    assertImageDataContainsMockCameraImage(imageData);
+    debuge.removeChild(canvas);
+    debuge.removeChild(video);
+}, document.title + " via WebGL texture");
+
+promise_test(async t => {
+    let video = await createSourceVideo();
+    debuge.append(video);
+    let canvas = createVerifyCanvas(video);
+    debuge.appendChild(canvas);
+
+    let gl = createVerifyWebGLContext(canvas);
+    let texture = gl.createTexture();
+    let imageBitmap = await createImageBitmap(video);
+    gl.bindTexture(gl.TEXTURE_2D, texture);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
+    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
+    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, imageBitmap);
+    wtu.clearAndDrawUnitQuad(gl, [0, 0, 0, 255]);
+
+    let imageData = getFramebufferAsImageData(gl);
+    assertImageDataContainsMockCameraImage(imageData);
+    debuge.removeChild(canvas);
+    debuge.removeChild(video);
+}, document.title + " via ImageBitmap to WebGL texture");
+
+        </script>
+    </body>
+</html>

Modified: branches/safari-613-branch/LayoutTests/webrtc/routines.js (288503 => 288504)


--- branches/safari-613-branch/LayoutTests/webrtc/routines.js	2022-01-25 01:55:10 UTC (rev 288503)
+++ branches/safari-613-branch/LayoutTests/webrtc/routines.js	2022-01-25 01:55:14 UTC (rev 288504)
@@ -297,3 +297,30 @@
         return (line.indexOf('a=fmtp') === -1 && line.indexOf('a=rtcp-fb') === -1 && line.indexOf('a=rtpmap') === -1) || line.indexOf(baselineNumber) !== -1;
     }).join('\r\n');
 }
+
+// Returns Uint8Array[4] of RGBA color.
+// p: [x, y] of 0..1 range.
+function getImageDataPixel(imageData, p)
+{
+    let xi = Math.floor(p[0] * imageData.width);
+    let yi = Math.floor(p[1] * imageData.height);
+    let i = (yi * imageData.width + xi) * 4;
+    return imageData.data.slice(i, i + 4);
+}
+
+// Asserts that ImageData instance contains mock camera image rendered by MiniBrowser and WebKitTestRunner.
+// Obtain full camera image of size `width`:
+//  await navigator.mediaDevices.getUserMedia({ video: { width: { exact: width } } });
+function assertImageDataContainsMockCameraImage(imageData)
+{
+    const white = [ 255, 255, 255, 255 ];
+    const yellow = [ 255, 255, 0, 255 ];
+    const cyan = [ 0, 255, 255, 255 ];
+    const lightGreen = [ 0, 128, 0, 255 ];
+
+    let err = 3;
+    assert_array_approx_equals(getImageDataPixel(imageData, [ 0.04, 0.7 ]), white, err, "white rect not found");
+    assert_array_approx_equals(getImageDataPixel(imageData, [ 0.08, 0.7 ]), yellow, err, "yellow rect not found");
+    assert_array_approx_equals(getImageDataPixel(imageData, [ 0.12, 0.7 ]), cyan, err, "cyan rect not found");
+    assert_array_approx_equals(getImageDataPixel(imageData, [ 0.16, 0.7 ]), lightGreen, err, "light green rect not found");
+}

Modified: branches/safari-613-branch/Source/WebCore/ChangeLog (288503 => 288504)


--- branches/safari-613-branch/Source/WebCore/ChangeLog	2022-01-25 01:55:10 UTC (rev 288503)
+++ branches/safari-613-branch/Source/WebCore/ChangeLog	2022-01-25 01:55:14 UTC (rev 288504)
@@ -1,5 +1,65 @@
 2022-01-24  Alan Coon  <alanc...@apple.com>
 
+        Cherry-pick r288025. rdar://problem/83407577
+
+    gl.texImage2D upload of getUserMedia streams via <video> element fails
+    https://bugs.webkit.org/show_bug.cgi?id=230617
+    <rdar://problem/83407577>
+    
+    Patch by Kimmo Kinnunen <kkinnu...@apple.com> on 2022-01-14
+    Reviewed by Youenn Fablet.
+    
+    Source/WebCore:
+    
+    Fix MSE camera to WebGL texture uploads.
+    Partially revert r280963 for Cocoa MediaPlayer implementations
+    that do not have nativeImageForCurrentTime / pixelBufferForCurrentTime.
+    Turns out MSE does not have these implemented, so currently fall back
+    to the painting path.
+    
+    Test: fast/mediastream/getUserMedia-to-canvas.html
+    
+    * html/canvas/WebGLRenderingContextBase.cpp:
+    (WebCore::WebGLRenderingContextBase::videoFrameToImage):
+    
+    LayoutTests:
+    
+    * fast/mediastream/getUserMedia-to-canvas-expected.txt: Added.
+    * fast/mediastream/getUserMedia-to-canvas.html: Added.
+    Add a test to test getting video frame to 2DContext and
+    WebGL canvas elements.
+    
+    * webrtc/routines.js:
+    Add a function to assert that ImageData contains
+    the simulated mock camera image. This is useful
+    in verifying that the image is exactly as
+    expected. The test tests only the default orientation.
+    Later changes will update the test to address the
+    camera rotation.
+    
+    git-svn-id: https://svn.webkit.org/repository/webkit/trunk@288025 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+    2022-01-14  Kimmo Kinnunen  <kkinnu...@apple.com>
+
+            gl.texImage2D upload of getUserMedia streams via <video> element fails
+            https://bugs.webkit.org/show_bug.cgi?id=230617
+            <rdar://problem/83407577>
+
+            Reviewed by Youenn Fablet.
+
+            Fix MSE camera to WebGL texture uploads.
+            Partially revert r280963 for Cocoa MediaPlayer implementations
+            that do not have nativeImageForCurrentTime / pixelBufferForCurrentTime.
+            Turns out MSE does not have these implemented, so currently fall back
+            to the painting path.
+
+            Test: fast/mediastream/getUserMedia-to-canvas.html
+
+            * html/canvas/WebGLRenderingContextBase.cpp:
+            (WebCore::WebGLRenderingContextBase::videoFrameToImage):
+
+2022-01-24  Alan Coon  <alanc...@apple.com>
+
         Cherry-pick r288015. rdar://problem/84617690
 
     [LFC][IFC] Incorrect root inline box position when non-empty atomic inline level child box has height: 0px

Modified: branches/safari-613-branch/Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp (288503 => 288504)


--- branches/safari-613-branch/Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp	2022-01-25 01:55:10 UTC (rev 288503)
+++ branches/safari-613-branch/Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp	2022-01-25 01:55:14 UTC (rev 288504)
@@ -5909,6 +5909,7 @@
 
 RefPtr<Image> WebGLRenderingContextBase::videoFrameToImage(HTMLVideoElement* video, BackingStoreCopy backingStoreCopy, const char* functionName)
 {
+    ImageBuffer* imageBuffer = nullptr;
     // FIXME: When texImage2D is passed an HTMLVideoElement, implementations
     // interoperably use the native RGB color values of the video frame (e.g.
     // Rec.601 color space values) for the texture. But nativeImageForCurrentTime
@@ -5923,35 +5924,36 @@
     auto nativeImage = video->nativeImageForCurrentTime();
     // Currently we might be missing an image due to MSE not being able to provide the first requested frame.
     // https://bugs.webkit.org/show_bug.cgi?id=228997
-    if (!nativeImage)
-        return nullptr;
-    IntSize imageSize = nativeImage->size();
-    if (imageSize.isEmpty()) {
-        synthesizeGLError(GraphicsContextGL::INVALID_VALUE, functionName, "video visible size is empty");
-        return nullptr;
+    if (nativeImage) {
+        IntSize imageSize = nativeImage->size();
+        if (imageSize.isEmpty()) {
+            synthesizeGLError(GraphicsContextGL::INVALID_VALUE, functionName, "video visible size is empty");
+            return nullptr;
+        }
+        FloatRect imageRect { { }, imageSize };
+        ImageBuffer* imageBuffer = m_generatedImageCache.imageBuffer(imageSize, nativeImage->colorSpace(), CompositeOperator::Copy);
+        if (!imageBuffer) {
+            synthesizeGLError(GraphicsContextGL::OUT_OF_MEMORY, functionName, "out of memory");
+            return nullptr;
+        }
+        imageBuffer->context().drawNativeImage(*nativeImage, imageRect.size(), imageRect, imageRect, CompositeOperator::Copy);
     }
-    FloatRect imageRect { { }, imageSize };
-    ImageBuffer* imageBuffer = m_generatedImageCache.imageBuffer(imageSize, nativeImage->colorSpace(), CompositeOperator::Copy);
+#endif
     if (!imageBuffer) {
-        synthesizeGLError(GraphicsContextGL::OUT_OF_MEMORY, functionName, "out of memory");
-        return nullptr;
+        // This is a legacy code path that produces incompatible texture size when the
+        // video visible size is different to the natural size. This should be removed
+        // once all platforms implement nativeImageForCurrentTime().
+        IntSize videoSize { static_cast<int>(video->videoWidth()), static_cast<int>(video->videoHeight()) };
+        auto colorSpace = video->colorSpace();
+        if (!colorSpace)
+            colorSpace = DestinationColorSpace::SRGB();
+        imageBuffer = m_generatedImageCache.imageBuffer(videoSize, *colorSpace);
+        if (!imageBuffer) {
+            synthesizeGLError(GraphicsContextGL::OUT_OF_MEMORY, functionName, "out of memory");
+            return nullptr;
+        }
+        video->paintCurrentFrameInContext(imageBuffer->context(), { { }, videoSize });
     }
-    imageBuffer->context().drawNativeImage(*nativeImage, imageRect.size(), imageRect, imageRect, CompositeOperator::Copy);
-#else
-    // This is a legacy code path that produces incompatible texture size when the
-    // video visible size is different to the natural size. This should be removed
-    // once all platforms implement nativeImageForCurrentTime().
-    IntSize videoSize { static_cast<int>(video->videoWidth()), static_cast<int>(video->videoHeight()) };
-    auto colorSpace = video->colorSpace();
-    if (!colorSpace)
-        colorSpace = DestinationColorSpace::SRGB();
-    ImageBuffer* imageBuffer = m_generatedImageCache.imageBuffer(videoSize, *colorSpace);
-    if (!imageBuffer) {
-        synthesizeGLError(GraphicsContextGL::OUT_OF_MEMORY, functionName, "out of memory");
-        return nullptr;
-    }
-    video->paintCurrentFrameInContext(imageBuffer->context(), { { }, videoSize });
-#endif
     RefPtr<Image> image = imageBuffer->copyImage(backingStoreCopy);
     if (!image) {
         synthesizeGLError(GraphicsContextGL::OUT_OF_MEMORY, functionName, "out of memory");