https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69652

--- Comment #5 from Yuri Rumyantsev <ysrumyan at gmail dot com> ---
Jakub,

I'd like to clarify one of your remarks:

5) IMHO you should give up also for !is_gimple_assign; trying to move, say, an
elemental function call into the conditional is just wrong

What's wrong with call motion? Note that masked stores and loads are
also represented as calls. I assume that simd clone function calls
likely must not be moved.

Thanks in advance.
Yuri.

P.S. This means that my patch is not correct and should be fixed.

2016-02-04 17:48 GMT+03:00 Yuri Rumyantsev <ysrum...@gmail.com>:
> Jakub,
>
> Thanks a lot for your detailed comments!
>
> I've just sent a patch for review to gcc-patches. Could you please
> take a look at it?
>
> Best regards.
> Yuri.
>
> 2016-02-03 20:22 GMT+03:00 jakub at gcc dot gnu.org 
> <gcc-bugzi...@gcc.gnu.org>:
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69652
>>
>> Jakub Jelinek <jakub at gcc dot gnu.org> changed:
>>
>>            What    |Removed                     |Added
>> ----------------------------------------------------------------------------
>>                  CC|jakub at redhat dot com            |
>>
>> --- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
>> Clearly a bug in optimize_mask_stores.
>> At the start of that function we have:
>> ...
>> mask__46.14_129 = vect__14.9_121 != vect__21.12_127;
>> _46 = _14 != _21;
>> mask__ifc__47.15_130 = mask__46.14_129;
>> _ifc__47 = _46;
>> MASK_STORE (vectp.16_132, 8B, mask__ifc__47.15_130, vect__22.13_128);
>> vect__24.20_140 = MEM[(double *)vectp.18_138];
>> _24 = *_13;
>> vect__25.21_141 = vect__21.12_127 + vect__24.20_140;
>> _25 = _21 + _24;
>> MASK_STORE (vectp.22_145, 8B, mask__ifc__47.15_130, vect__25.21_141);
>> k_27 = k_28 + 1;
>> ...
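A scalar source loop of roughly this shape would produce the dump above after
if-conversion and vectorization; the array names are hypothetical, inferred
from the GIMPLE (two stores guarded by the same condition, with the load that
becomes the scalar "_24 = *_13" between them):

```c
#include <assert.h>

/* Hypothetical reduced testcase: both stores are guarded by the same
   a[k] != b[k] condition, so the vectorizer emits two MASK_STOREs with
   the same mask; the read of d[k] sits between them.  */
void foo (double *a, double *b, double *c, double *d, double *e, int n)
{
  for (int k = 0; k < n; k++)
    if (a[k] != b[k])
      {
        c[k] = a[k];        /* becomes the first MASK_STORE */
        e[k] = b[k] + d[k]; /* _25 = _21 + _24; the second MASK_STORE */
      }
}
```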
>> Now, the MASK_STORE calls are processed from last to first, which is fine; we
>> first move the second MASK_STORE and the vector stmts that feed it:
>> vect__24.20_140 = MEM[(double *)vectp.18_138];
>> vect__25.21_141 = vect__21.12_127 + vect__24.20_140;
>> MASK_STORE (vectp.22_145, 8B, mask__ifc__47.15_130, vect__25.21_141);
>> but then continue trying to move the first MASK_STORE into the same
>> conditional block, because it has the same mask.  In this case that is wrong,
>> because there is the scalar load in between (_24 = *_13) that just waits for
>> DCE, but generally there could be arbitrary code.
>>             /* Put other masked stores with the same mask to STORE_BB.  */
>>             if (worklist.is_empty ()
>>                 || gimple_call_arg (worklist.last (), 2) != mask
>>                 || worklist.last () != stmt1)
>>               break;
>> has a simplistic check (it doesn't consider another MASK_STORE unless the
>> backward walk reached that stmt), but of course it doesn't work too well if
>> some scalar stmts were skipped.
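The hazard here is ordinary store-to-load ordering. A plain-C model of a
masked store makes it concrete (the helper names and the aliasing setup are
invented for illustration, not GCC code): if the sinking pass skips the scalar
load, the masked store that originally preceded it ends up executing after it.

```c
#include <assert.h>

/* Scalar model of MASK_STORE: dst[i] = src[i] wherever mask[i] is set.  */
static void mask_store (double *dst, const double *src,
                        const int *mask, int n)
{
  for (int i = 0; i < n; i++)
    if (mask[i])
      dst[i] = src[i];
}

/* Original order: masked store first, then the scalar load.  */
static double load_after_store (double *mem, const double *src,
                                const int *mask)
{
  mask_store (mem, src, mask, 1);
  return mem[0];               /* the "_24 = *_13" load */
}

/* Broken order: the masked store was sunk past the load it may feed --
   this is what skipping the in-between scalar stmt allows.  */
static double load_before_store (double *mem, const double *src,
                                 const int *mask)
{
  double v = mem[0];
  mask_store (mem, src, mask, 1);
  return v;
}
```

When the load's address aliases the store's destination, the two orders
observably differ, so the motion is invalid in general.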
>>
>> I see various issues in that function:
>> 1) wrong formatting:
>>           gsi_to = gsi_start_bb (store_bb);
>>           if (dump_enabled_p ())
>>             {
>>               dump_printf_loc (MSG_NOTE, vect_location,
>>                                "Move stmt to created bb\n");
>>               dump_gimple_stmt (MSG_NOTE, TDF_SLIM, last, 0);
>>             }
>>             /* Move all stored value producers if possible.  */
>>             while (!gsi_end_p (gsi))
>>               {
>> The "Move all stored value producers" comment and everything below it up to
>> the corresponding closing } should be moved two columns to the left
>> 2) IMHO stmt1 should be set to NULL before that while (!gsi_end_p (gsi)),
>> as the function is prepared to handle multiple bbs
>> 3) next to the gimple_vdef non-NULL break there should IMHO also be a
>> gimple_has_volatile_ops -> break check, just for safety; we don't want to
>> mishandle, say, volatile reads etc.
>> 4) you have to skip over debug stmts if there are any, otherwise we have a
>> -fcompare-debug issue
>> 5) IMHO you should give up also for !is_gimple_assign; trying to move, say,
>> an elemental function call into the conditional is just wrong
>> 6) the
>>                 /* Skip scalar statements.  */
>>                 if (!VECTOR_TYPE_P (TREE_TYPE (lhs)))
>>                   continue;
>> should be reconsidered.  IMHO if you have scalar stmts that feed just the
>> stmts in the STORE_BB, there is no reason not to move them too; if you have
>> scalar stmts that feed other stmts too, IMHO you should give up on them if
>> they have a vuse.  What code did you have in mind when adding the
>> !VECTOR_TYPE_P check?
>> 7) the FOR_EACH_IMM_USE_FAST loop should ignore debug stmts, at least for
>> decisions on when to stop at some stmt; but the debug stmts, if there are
>> any, need to be reset if we move the def stmt that they are using, otherwise
>> we risk -fcompare-debug issues
>> 8) the worklist.last () != stmt1 check needs to be -fcompare-debug friendly
>> too, so if there are debug stmts in between the last moved stmt and the
>> previous MASK_STORE, we need to handle it as if there aren't any debug stmts
>> in between
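Points 4, 7 and 8 all reduce to one invariant: every decision the pass makes
must come out the same with and without debug stmts present. A toy model of a
debug-insensitive "previous stmt" walk shows the shape of the fix (plain C
with invented types, not the GIMPLE iterator API):

```c
#include <assert.h>

enum kind { STMT_REAL, STMT_DEBUG };
struct stmt { enum kind kind; int id; };

/* Return the id of the nearest preceding non-debug stmt before POS,
   or -1 if there is none.  Because debug stmts are skipped, the result
   is identical whether or not they are present -- the property that
   -fcompare-debug verifies.  */
static int prev_real_stmt (const struct stmt *seq, int pos)
{
  for (int i = pos - 1; i >= 0; i--)
    if (seq[i].kind == STMT_REAL)
      return seq[i].id;
  return -1;
}
```

An adjacency test like worklist.last () != stmt1 would then compare against
the previous *real* stmt rather than the literal predecessor.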
>>
