On Tue, 16 Aug 2022, Hubert Mazur wrote:

Provide optimized implementation of pix_abs16_y2 function for arm64.

Performance comparison tests are shown below.
pix_abs_0_2_c: 317.2
pix_abs_0_2_neon: 37.5

Benchmarks and tests run with checkasm tool on AWS Graviton 3.

Signed-off-by: Hubert Mazur <h...@semihalf.com>
---
libavcodec/aarch64/me_cmp_init_aarch64.c |  3 +
libavcodec/aarch64/me_cmp_neon.S         | 75 ++++++++++++++++++++++++
2 files changed, 78 insertions(+)

diff --git a/libavcodec/aarch64/me_cmp_init_aarch64.c b/libavcodec/aarch64/me_cmp_init_aarch64.c
index 955592625a..1c36d3d7cb 100644
--- a/libavcodec/aarch64/me_cmp_init_aarch64.c
+++ b/libavcodec/aarch64/me_cmp_init_aarch64.c
@@ -29,6 +29,8 @@ int ff_pix_abs16_xy2_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t
                      ptrdiff_t stride, int h);
int ff_pix_abs16_x2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h);
+int ff_pix_abs16_y2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+                      ptrdiff_t stride, int h);

The continuation line of this function declaration is misaligned relative to the preexisting declarations above it.

diff --git a/libavcodec/aarch64/me_cmp_neon.S b/libavcodec/aarch64/me_cmp_neon.S
index 367924b3c2..0ec9c0465b 100644
--- a/libavcodec/aarch64/me_cmp_neon.S
+++ b/libavcodec/aarch64/me_cmp_neon.S
@@ -404,3 +404,78 @@ function sse4_neon, export=1

        ret
endfunc
+
+function ff_pix_abs16_y2_neon, export=1

Why place this new function at the bottom of the file, instead of having it logically follow the other preexisting pix_abs16 functions? In the version I pushed, I moved it further up.

+        // x0           unused
+        // x1           uint8_t *pix1
+        // x2           uint8_t *pix2
+        // x3           ptrdiff_t stride
+        // x4           int h

This should be w4; h is an int, so it arrives as a 32-bit value and is accessed via the w register. You had fixed this in a couple of the other patches, but missed this one.

+
+        // initialize buffers
+        movi            v29.8h, #0                      // clear the accumulator
+        movi            v28.8h, #0                      // clear the accumulator
+        movi            d18, #0

d18 is unused here too, so this initialization can be dropped.
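
To illustrate both remarks, the top of the function could look something like this (an untested sketch, otherwise matching your code):

        // x0           unused
        // x1           uint8_t *pix1
        // x2           uint8_t *pix2
        // x3           ptrdiff_t stride
        // w4           int h

        // initialize buffers
        movi            v29.8h, #0                      // clear the accumulator
        movi            v28.8h, #0                      // clear the accumulator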


+        add             x5, x2, x3                      // pix2 + stride
+        cmp             w4, #4
+        b.lt            2f
+
+// make 4 iterations at once
+1:
+
+        // abs(pix1[0], avg2(pix2[0], pix2[0 + stride]))
+        // avg2(a, b) = (((a) + (b) + 1) >> 1)
+        // abs(x) = (x < 0 ? (-x) : (x))
+
+        ld1             {v1.16b}, [x2], x3              // Load pix2 for first iteration
+        ld1             {v2.16b}, [x5], x3              // Load pix3 for first iteration
+        urhadd          v30.16b, v1.16b, v2.16b         // Rounding halving add, first iteration
+        ld1             {v0.16b}, [x1], x3              // Load pix1 for first iteration
+        uabal           v29.8h, v0.8b, v30.8b           // Absolute difference of lower half, first iteration

This whole first sequence is almost entirely serial, with each instruction waiting for the result of the previous one - did you forget to interleave this with the rest of the operations?

Normally I wouldn't bother with minor interleaving details, but here the impact was rather big. I manually reinterleaved the whole function, and got this speedup:

Before:       Cortex A53    A72     A73
pix_abs_0_2_neon:  153.0   63.7    52.7
After:
pix_abs_0_2_neon:  141.0   61.7    51.7
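
To illustrate the kind of reordering I mean, here is a rough, untested sketch of the unrolled loop body (register choices are arbitrary, and this is not necessarily identical to the version I pushed): load pix2/pix3 for all four iterations up front, then interleave the urhadds, the pix1 loads and the uabal/uabal2 accumulation, so that no instruction directly consumes the result of the instruction right before it.

1:
        // Load pix2 and pix2 + stride for all four iterations first
        ld1             {v1.16b},  [x2], x3             // pix2, iteration 1
        ld1             {v2.16b},  [x5], x3             // pix3, iteration 1
        ld1             {v3.16b},  [x2], x3             // pix2, iteration 2
        ld1             {v4.16b},  [x5], x3             // pix3, iteration 2
        ld1             {v5.16b},  [x2], x3             // pix2, iteration 3
        ld1             {v6.16b},  [x5], x3             // pix3, iteration 3
        ld1             {v7.16b},  [x2], x3             // pix2, iteration 4
        ld1             {v16.16b}, [x5], x3             // pix3, iteration 4
        // Interleave the averaging, the pix1 loads and the accumulation
        urhadd          v30.16b, v1.16b, v2.16b         // avg2, iteration 1
        ld1             {v0.16b},  [x1], x3             // pix1, iteration 1
        urhadd          v31.16b, v3.16b, v4.16b         // avg2, iteration 2
        ld1             {v17.16b}, [x1], x3             // pix1, iteration 2
        uabal           v29.8h, v0.8b,  v30.8b          // SAD of lower half, iteration 1
        uabal2          v28.8h, v0.16b, v30.16b         // SAD of upper half, iteration 1
        urhadd          v26.16b, v5.16b, v6.16b         // avg2, iteration 3
        ld1             {v18.16b}, [x1], x3             // pix1, iteration 3
        uabal           v29.8h, v17.8b,  v31.8b         // SAD of lower half, iteration 2
        uabal2          v28.8h, v17.16b, v31.16b        // SAD of upper half, iteration 2
        urhadd          v27.16b, v7.16b, v16.16b        // avg2, iteration 4
        ld1             {v19.16b}, [x1], x3             // pix1, iteration 4
        sub             w4, w4, #4                      // h -= 4
        uabal           v29.8h, v18.8b,  v26.8b         // SAD of lower half, iteration 3
        uabal2          v28.8h, v18.16b, v26.16b        // SAD of upper half, iteration 3
        cmp             w4, #4
        uabal           v29.8h, v19.8b,  v27.8b         // SAD of lower half, iteration 4
        uabal2          v28.8h, v19.16b, v27.16b        // SAD of upper half, iteration 4
        b.ge            1b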

// Martin
