https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105617

--- Comment #11 from Michael_S <already5chosen at yahoo dot com> ---
(In reply to Richard Biener from comment #10)
> (In reply to Hongtao.liu from comment #9)
> > (In reply to Hongtao.liu from comment #8)
> > > (In reply to Hongtao.liu from comment #7)
> > > > Hmm, we have specific code to add scalar->vector (vmovq) cost to vector
> > > > construct, but it seems not to work here; guess it's because of &r0, and
> > > > it was treated as a load, not a scalar?
> > > Yes, it's true for gimple_assign_load_p
> > > 
> > > 
> > > (gdb) p debug_gimple_stmt (def)
> > > # VUSE <.MEM_46>
> > > r0.0_20 = r0;
> > It's a load from the stack, finally eliminated in rtl dse1, but the
> > vectorizer doesn't know that here.
> 
> Yes, it's difficult for the SLP vectorizer to guess whether rN will come
> from memory or not.  Some friendlier middle-end representation for
> add-with-carry might be nice - the x86 backend could for example fold
> __builtin_ia32_addcarryx_u64 to use a _Complex unsigned long long for the
> return, ferrying the carry in __imag.  Alternatively we could devise
> some special GIMPLE_ASM kind ferrying RTL and not assembly so the
> backend could fold it directly to RTL on GIMPLE with asm constraints
> doing the plumbing ... (we'd need some match-scratch and RTL expansion
> would still need to allocate the actual pseudos).
> 
>   <bb 2> [local count: 1073741824]:
>   _1 = *srcB_17(D);
>   _2 = *srcA_18(D);
>   _30 = __builtin_ia32_addcarryx_u64 (0, _2, _1, &r0);
>   _3 = MEM[(const uint64_t *)srcB_17(D) + 8B];
>   _4 = MEM[(const uint64_t *)srcA_18(D) + 8B];
>   _5 = (int) _30;
>   _29 = __builtin_ia32_addcarryx_u64 (_5, _4, _3, &r1);
>   _6 = MEM[(const uint64_t *)srcB_17(D) + 16B];
>   _7 = MEM[(const uint64_t *)srcA_18(D) + 16B];
>   _8 = (int) _29;
>   _28 = __builtin_ia32_addcarryx_u64 (_8, _7, _6, &r2);
>   _9 = MEM[(const uint64_t *)srcB_17(D) + 24B];
>   _10 = MEM[(const uint64_t *)srcA_18(D) + 24B];
>   _11 = (int) _28;
>   __builtin_ia32_addcarryx_u64 (_11, _10, _9, &r3);
>   r0.0_12 = r0;
>   r1.1_13 = r1;
>   _36 = {r0.0_12, r1.1_13};
>   r2.2_14 = r2;
>   r3.3_15 = r3;
>   _37 = {r2.2_14, r3.3_15};
>   vectp.9_35 = dst_19(D);
>   MEM <vector(2) long unsigned int> [(uint64_t *)vectp.9_35] = _36;
>   vectp.9_39 = vectp.9_35 + 16;
>   MEM <vector(2) long unsigned int> [(uint64_t *)vectp.9_39] = _37;
> 
> so for the situation at hand I don't see any reasonable way out that
> doesn't have the chance of regressing things in other places (like
> treat loads from non-indexed auto variables specially or so).  The
> only real solution is to find a GIMPLE representation for
> __builtin_ia32_addcarryx_u64 that doesn't force the alternate output
> to memory.
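For context, the GIMPLE dump above presumably comes from a plain four-limb
addition along these lines (a sketch; the function and variable names are my
assumptions, using the <immintrin.h> add-with-carry intrinsic, x86-64 only):

```c
#include <stdint.h>
#include <immintrin.h>

/* 256-bit addition, dst = srcA + srcB over four 64-bit limbs.
   Each intrinsic writes its sum through the out-pointer, which is
   what forces r0..r3 to memory and confuses the SLP cost model. */
void add4x64(uint64_t *dst, const uint64_t *srcA, const uint64_t *srcB)
{
    unsigned long long r0, r1, r2, r3;
    unsigned char c;
    c = _addcarry_u64(0, srcA[0], srcB[0], &r0);
    c = _addcarry_u64(c, srcA[1], srcB[1], &r1);
    c = _addcarry_u64(c, srcA[2], srcB[2], &r2);
    (void)_addcarry_u64(c, srcA[3], srcB[3], &r3);
    dst[0] = r0;
    dst[1] = r1;
    dst[2] = r2;
    dst[3] = r3;
}
```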


How about a simple heuristic: "Never auto-vectorize integer code unless in an
inner loop"?
It would be optimal more than 80% of the time, and even in the cases where it's
sub-optimal the impact is likely single-digit percent. On the other hand, the
impact of a mistake in the opposite direction (i.e. over-vectorization) is
often quite large.
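That said, the _Complex idea above can at least be modeled at the source
level. A purely illustrative sketch (this is not what the backend folds
__builtin_ia32_addcarryx_u64 to today; it just shows carry ferried in
__imag__, using the GNU complex-integer extension):

```c
#include <stdint.h>

/* Model of the suggested representation: sum in __real__,
   carry-out in __imag__, so no output is forced to memory. */
static _Complex unsigned long long
addc_model(unsigned long long carry_in, unsigned long long a,
           unsigned long long b)
{
    unsigned long long s1 = a + b;
    unsigned long long s2 = s1 + carry_in;
    /* At most one of the two partial sums can wrap when carry_in <= 1. */
    unsigned long long carry_out = (s1 < a) + (s2 < s1);
    _Complex unsigned long long r;
    __real__ r = s2;
    __imag__ r = carry_out;
    return r;
}
```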
