On Mon, Mar 14, 2022 at 8:20 PM Hongtao Liu <crazy...@gmail.com> wrote:
>
> On Mon, Mar 14, 2022 at 7:25 PM Jakub Jelinek <ja...@redhat.com> wrote:
> >
> > On Sun, Mar 13, 2022 at 09:34:10PM +0800, Hongtao Liu wrote:
> > > LGTM, thanks for handling this.
> >
> > Thanks, committed.
> >
> > > > Note, while the Intrinsics guide for _mm_loadu_si32 says SSE2,
> > > > for _mm_loadu_si16 it strangely says SSE.  But the intrinsic
> > > > returns __m128i, which is only defined in emmintrin.h, and
> > > > _mm_set_epi16 is likewise only available for SSE2 and later in
> > > > emmintrin.h.  Even clang defines it in emmintrin.h and ends up
> > > > with an inlining failure when calling _mm_loadu_si16 from an
> > > > sse,no-sse2 function.  So, isn't that a bug in the Intrinsics
> > > > guide instead?
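> > > >
> > > > A minimal reproducer of that clang failure might look like the
> > > > following (untested sketch; the function name is mine):
> > > >
> > > > #include <immintrin.h>
> > > >
> > > > __attribute__((target("sse,no-sse2"))) __m128i
> > > > load16 (void const *p)
> > > > {
> > > >   /* clang rejects this with an always_inline/target-feature
> > > >      inlining error, since _mm_loadu_si16 lives in emmintrin.h
> > > >      and is declared with the sse2 target feature.  */
> > > >   return _mm_loadu_si16 (p);
> > > > }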
> > > I think it's a bug: it's supposed to generate movzx + movd, and
> > > movd is under SSE2.  I have reported it to the colleague who
> > > maintains the Intel Intrinsics guide.
> > >
> > > Similar bugs exist for
> > > _mm_loadu_si64
> > > _mm_storeu_si16
> > > _mm_storeu_si64
> >
> > Currently it emits pxor + pinsrw, but even those are SSE2 instructions,
> > unless they use an MMX register (then it is MMX and SSE).
> > I agree that movzwl + movd seems better than pxor + pinsrw though.
> > So, do we want to help it a little bit then?  Like:
> >
> > 2022-03-14  Jakub Jelinek  <ja...@redhat.com>
> >
> >         * config/i386/emmintrin.h (_mm_loadu_si16): Use _mm_set_epi32
> >         instead of _mm_set_epi16 and zero extend the memory load.
> >
> >         * gcc.target/i386/pr95483-1.c: Use -msse2 instead of -msse in
> >         dg-options, allow movzwl+movd instead of pxor with pinsrw.
> >
> > --- gcc/config/i386/emmintrin.h.jj      2022-03-14 10:44:29.402617685 +0100
> > +++ gcc/config/i386/emmintrin.h 2022-03-14 11:58:18.062666257 +0100
> > @@ -724,7 +724,7 @@ _mm_loadu_si32 (void const *__P)
> >  extern __inline __m128i __attribute__((__gnu_inline__, __always_inline__, __artificial__))
> >  _mm_loadu_si16 (void const *__P)
> >  {
> > -  return _mm_set_epi16 (0, 0, 0, 0, 0, 0, 0, (*(__m16_u *)__P)[0]);
> > +  return _mm_set_epi32 (0, 0, 0, (unsigned short) ((*(__m16_u *)__P)[0]));
> >  }
> Under avx512fp16, the former directly generates vmovw, but the latter
> still generates movzx + vmovd, so there is still a missed optimization.
> Thus I'd prefer to optimize it in the backend: pxor + pinsrw -> movzx +
> movd -> vmovw (under avx512fp16).
> I'll open a PR for that and optimize it in GCC 13.
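>
> For reference, an illustration (untested sketch; the wrapper function
> is mine) of the three sequences for loading 16 bits into an __m128i,
> assuming the pointer arrives in %rdi:
>
> #include <immintrin.h>
>
> __m128i
> load16 (void const *p)
> {
>   return _mm_loadu_si16 (p);
>   /* before the patch:  pxor %xmm0,%xmm0; pinsrw $0,(%rdi),%xmm0
>      with the patch:    movzwl (%rdi),%eax; movd %eax,%xmm0
>      -mavx512fp16 goal: vmovw (%rdi),%xmm0  */
> }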
PR104915.
> >
> >  extern __inline void __attribute__((__gnu_inline__, __always_inline__, __artificial__))
> > --- gcc/testsuite/gcc.target/i386/pr95483-1.c.jj        2020-10-14 22:05:19.380856952 +0200
> > +++ gcc/testsuite/gcc.target/i386/pr95483-1.c   2022-03-14 12:11:07.716891710 +0100
> > @@ -1,7 +1,7 @@
> >  /* { dg-do compile } */
> > -/* { dg-options "-O2 -msse" } */
> > -/* { dg-final { scan-assembler-times "pxor\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
> > -/* { dg-final { scan-assembler-times "pinsrw\[ \\t\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
> > +/* { dg-options "-O2 -msse2" } */
> > +/* { dg-final { scan-assembler-times "(?:movzwl\[ \\t\]+\[^\n\]*|pxor\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*%xmm\[0-9\]+)(?:\n|\[ \\t\]+#)" 1 } } */
> > +/* { dg-final { scan-assembler-times "(?:movd|pinsrw)\[ \\t\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
> >  /* { dg-final { scan-assembler-times "pextrw\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*(?:\n|\[ \\t\]+#)" 1 } } */
> >
> >
> >
> >
> >         Jakub
> >
>
>
> --
> BR,
> Hongtao



-- 
BR,
Hongtao
