On Thu, Feb 17, 2022 at 11:04:04AM +0100, Alan Kelly wrote:
> The main loop processes blocks of 16 pixels. The tail processes blocks
> of size 4.
> ---
>  libswscale/x86/scale_avx2.asm | 48 +++++++++++++++++++++++++++++++++--
>  1 file changed, 46 insertions(+), 2 deletions(-)
>
> diff --git a/libswscale/x86/scale_avx2.asm b/libswscale/x86/scale_avx2.asm
> index 20acdbd633..dc42abb100 100644
> --- a/libswscale/x86/scale_avx2.asm
> +++ b/libswscale/x86/scale_avx2.asm
> @@ -53,6 +53,9 @@ cglobal hscale8to15_%1, 7, 9, 16, pos0, dst, w, srcmem, filter, fltpos, fltsize,
>      mova m14, [four]
>      shr fltsized, 2
>  %endif
> +    cmp wq, 16
> +    jl .tail_loop
> +    mov countq, 0x10
>  .loop:
>      movu m1, [fltposq]
>      movu m2, [fltposq+32]
> @@ -97,11 +100,52 @@ cglobal hscale8to15_%1, 7, 9, 16, pos0, dst, w, srcmem, filter, fltpos, fltsize,
>      vpsrad m6, 7
>      vpackssdw m5, m5, m6
>      vpermd m5, m15, m5
> -    vmovdqu [dstq + countq * 2], m5
> +    vmovdqu [dstq], m5
> +    add dstq, 0x20
>      add fltposq, 0x40
>      add countq, 0x10
>      cmp countq, wq
> -    jl .loop
> +    jle .loop
> +
> +    sub countq, 0x10
> +    cmp countq, wq
> +    jge .end
> +
> +.tail_loop:
> +    movu xm1, [fltposq]
> +%ifidn %1, X4
> +    pxor xm9, xm9
> +    pxor xm10, xm10
> +    xor innerq, innerq
> +.tail_innerloop:
> +%endif
> +    vpcmpeqd xm13, xm13
> +    vpgatherdd xm3, [srcmemq + xm1], xm13
> +    vpunpcklbw xm5, xm3, xm0
> +    vpunpckhbw xm6, xm3, xm0
> +    vpmaddwd xm5, xm5, [filterq]
> +    vpmaddwd xm6, xm6, [filterq + 16]
> +    add filterq, 0x20
> +%ifidn %1, X4
> +    paddd xm9, xm5
> +    paddd xm10, xm6
> +    paddd xm1, xm14
> +    add innerq, 1
> +    cmp innerq, fltsizeq
> +    jl .tail_innerloop
> +    vphaddd xm5, xm9, xm10
> +%else
> +    vphaddd xm5, xm5, xm6
> +%endif
> +    vpsrad xm5, 7
> +    vpackssdw xm5, xm5, xm5
> +    vmovq [dstq], xm5
> +    add dstq, 0x8
> +    add fltposq, 0x10
> +    add countq, 0x4
> +    cmp countq, wq
> +    jl .tail_loop
> +.end:
>      REP_RET
>  %endmacro
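For readers not fluent in x86inc assembly, the control flow the patch adds can be modeled in scalar C. This is a hedged sketch, not the real filter: `hscale_model` and the `<< 7` stand-in are hypothetical, and it only shows the block structure (a main loop over blocks of 16, a `.tail_loop` over blocks of 4, with `w` assumed to be a multiple of 4 as in the AVX2 code).

```c
#include <stdint.h>

/* Toy scalar model of the patch's control flow: process w elements
 * in blocks of 16 (the main .loop), then finish the remainder in
 * blocks of 4 (the new .tail_loop).  The `<< 7` is a stand-in for
 * the real horizontal-scale filtering, which is not modeled here. */
static void hscale_model(int16_t *dst, const uint8_t *src, int w)
{
    int count = 0;
    for (; count + 16 <= w; count += 16)      /* main .loop */
        for (int i = 0; i < 16; i++)
            dst[count + i] = (int16_t)(src[count + i] << 7);
    for (; count < w; count += 4)             /* .tail_loop */
        for (int i = 0; i < 4; i++)
            dst[count + i] = (int16_t)(src[count + i] << 7);
}
```

With, say, w = 20, the first loop covers elements 0..15 and the tail loop covers 16..19 in one block of 4, which is the situation the `cmp wq, 16` / `jl .tail_loop` guard handles.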
countq is only used as a counter after this. If you count against 0, this
reduces the instructions in the loop from add/cmp to just add. Similarly,
the previously used [dstq + countq * 2] avoids an add.

Can you comment on the performance impact of these changes? On previous
generations of CPUs this would have been generally slower. I haven't
really optimized ASM for current CPUs, so these comments might not apply
today, but no one else seems to be reviewing this.

thx

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The worst form of inequality is to try to make unequal things equal.
-- Aristotle
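The counter suggestion above can be illustrated in C. This is a hypothetical scalar sketch (`scale_counted_down` and the `>> 7` body are invented for illustration): the counter starts at -w and runs up to 0, so in the generated loop the increment itself sets the flags and the separate `cmp countq, wq` disappears, while the biased pointers keep `[dst + count]`-style addressing valid.

```c
#include <stdint.h>

/* Scalar model of counting against 0: instead of
 *     add countq, N ; cmp countq, wq ; jl .loop
 * start at -w and count up, so the add alone produces the flags
 * for the branch.  Pointers are biased by w so indexing with the
 * negative count still addresses the right elements. */
static void scale_counted_down(int16_t *dst, const int16_t *src, int w)
{
    intptr_t count = -(intptr_t)w;    /* runs from -w up to 0 */
    dst += w;                         /* bias so dst[count] is valid */
    src += w;
    do {
        dst[count] = (int16_t)(src[count] >> 7); /* stand-in filter */
        count++;
    } while (count < 0);              /* branch on the add's result */
}
```

Whether this wins on current microarchitectures is exactly the open question above: add/cmp/jl pairs are often macro-fused anyway, so the saved cmp may or may not show up in benchmarks.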
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".