"Yangfei (Felix)" <felix.y...@huawei.com> writes:
> Hi!
>> -----Original Message-----
>> From: Yangfei (Felix)
>> Sent: Monday, March 30, 2020 5:28 PM
>> To: gcc-patches@gcc.gnu.org
>> Cc: 'rguent...@suse.de' <rguent...@suse.de>
>> Subject: [PATCH] ICE: in vectorizable_load, at tree-vect-stmts.c:9173
>> 
>> Hi,
>> 
>> New bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94398
>> 
>> With -mstrict-align, aarch64_builtin_support_vector_misalignment returns
>> false when the misalignment factor is unknown at compile time.
>> Then vect_supportable_dr_alignment returns dr_unaligned_unsupported,
>> which triggers the ICE.  I have pasted the call trace in the bug report.
>> 
>> vect_supportable_dr_alignment is expected to return either dr_aligned or
>> dr_unaligned_supported for masked operations.
>> But it seems that this function only handles the internal functions
>> IFN_MASK_LOAD and IFN_MASK_STORE, whereas we are emitting a mask gather
>> load for this test case.
>> As backends have their own vector misalignment support policies, I suppose
>> this would be better handled in the shared auto-vectorization code.
>> 
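
For reference, a hypothetical reduced example of the kind of loop involved
(this is not the test case from the PR): a conditional load with a runtime
stride, which SVE can vectorize as a mask gather load whose misalignment is
unknown at compile time, e.g. with -O3 -march=armv8.2-a+sve -mstrict-align:

int
foo (unsigned short *restrict x, int *restrict pred, int n, int stride)
{
  int sum = 0;
  for (int i = 0; i < n; ++i)
    /* The condition makes the access masked; the runtime stride makes the
       vectorizer fall back to a gather, so the combination can become an
       IFN_MASK_GATHER_LOAD.  */
    if (pred[i])
      sum += x[i * stride];
  return sum;
}
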
>
> I can only reply here to comment on the bug, as my bugzilla account is
> locked for now.

Sorry to hear that.  What was the reason?

> The way Richard (rsandifo) suggests also works for me and I agree it should 
> be more consistent and better for compile time. 
> One question here: when will an IFN_MASK_LOAD/IFN_MASK_STORE be passed to
> vect_supportable_dr_alignment?

I'd expect that to happen in the same cases as for unmasked loads and stores.
It certainly will for unconditional loads and stores that get masked
via full-loop masking.

In principle, the same rules apply to both masked and unmasked accesses.
But for SVE, both masked and unmasked accesses should support misalignment,
which is why I think the current target hook is probably wrong for
SVE + -mstrict-align.
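
To illustrate (a sketch only, not a tested patch): the aarch64 hook could
treat SVE modes as supporting misalignment even under -mstrict-align.  The
exact body of the existing hook and the availability of a helper like
aarch64_sve_mode_p are assumptions here:

static bool
aarch64_builtin_support_vector_misalignment (machine_mode mode,
                                             const_tree type, int misalignment,
                                             bool is_packed)
{
  /* Sketch: SVE loads and stores, masked or not, only require element
     alignment, so unknown vector-level misalignment should be acceptable
     even for -mstrict-align.  */
  if (aarch64_sve_mode_p (mode))
    return true;

  if (TARGET_SIMD && STRICT_ALIGNMENT)
    {
      /* No movmisalign pattern means no misaligned accesses.  */
      if (optab_handler (movmisalign_optab, mode) == CODE_FOR_nothing)
        return false;

      /* Misalignment factor is unknown at compile time.  */
      if (misalignment == -1)
        return false;
    }

  return default_builtin_support_vector_misalignment (mode, type,
                                                      misalignment, is_packed);
}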

> @@ -8051,8 +8051,15 @@ vectorizable_store (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>    auto_vec<tree> dr_chain (group_size);
>    oprnds.create (group_size);
>
> -  alignment_support_scheme
> -    = vect_supportable_dr_alignment (first_dr_info, false);
> +  /* Strided accesses perform only component accesses, alignment
> +     is irrelevant for them.  */
> +  if (STMT_VINFO_STRIDED_P (first_dr_info->stmt)
> +      && !STMT_VINFO_GROUPED_ACCESS (first_dr_info->stmt))

I think this should be based on memory_access_type == VMAT_GATHER_SCATTER
instead.  At this point, STMT_VINFO_* describes properties of the original
scalar access(es) while memory_access_type describes the vector
implementation strategy.  It's the latter that matters here.

Same thing for loads.
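
As a rough sketch of what I mean (untested), the guard in both functions
would look something like:

  /* Gather/scatter implementations access individual elements, so vector
     alignment is irrelevant for them.  */
  if (memory_access_type == VMAT_GATHER_SCATTER)
    alignment_support_scheme = dr_unaligned_supported;
  else
    alignment_support_scheme
      = vect_supportable_dr_alignment (first_dr_info, false);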

Thanks,
Richard

> +    alignment_support_scheme = dr_unaligned_supported;
> +  else
> +    alignment_support_scheme
> +      = vect_supportable_dr_alignment (first_dr_info, false);
> +
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> @@ -9168,8 +9175,15 @@ vectorizable_load (stmt_vec_info stmt_info, gimple_stmt_iterator *gsi,
>        ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
>      }
>
> -  alignment_support_scheme
> -    = vect_supportable_dr_alignment (first_dr_info, false);
> +  /* Strided accesses perform only component accesses, alignment
> +     is irrelevant for them.  */
> +  if (STMT_VINFO_STRIDED_P (first_dr_info->stmt)
> +      && !STMT_VINFO_GROUPED_ACCESS (first_dr_info->stmt))
> +    alignment_support_scheme = dr_unaligned_supported;
> +  else
> +    alignment_support_scheme
> +      = vect_supportable_dr_alignment (first_dr_info, false);
> +
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
