Hi Lehua,

Thanks for starting with the refactoring.  I have some minor comments.

> +/* The value means the number of operands for insn_expander.  */
>  enum insn_type
>  {
>    RVV_MISC_OP = 1,
>    RVV_UNOP = 2,
> -  RVV_UNOP_M = RVV_UNOP + 2,
> -  RVV_UNOP_MU = RVV_UNOP + 2,
> -  RVV_UNOP_TU = RVV_UNOP + 2,
> -  RVV_UNOP_TUMU = RVV_UNOP + 2,
> +  RVV_UNOP_MASK = RVV_UNOP + 2,

The cleanup here is good; right now it's not really an insn_type but
indeed just the number of operands.  My original idea was to have an
insn type and a mostly unified expander that performs all necessary
operations depending on the insn_type.  Just to give an idea of why it's
called that way.

> +  rtx ops[RVV_BINOP_MASK] = {target, mask, target, op, sel};
> +  emit_vlmax_masked_mu_insn (icode, RVV_BINOP_MASK, ops);

One of the ideas was that a function emit_vlmax_masked_mu_insn would already
know that it's dealing with a mask and we would just pass something like
RVV_BINOP.  The other way would be to just have emit_vlmax_mu_insn or
something and let the rest be deduced from the insn_type.  Even the vlmax
I intended to have mostly implicit but that somehow got lost during
refactorings :)  No need to change anything for now, just for perspective
again. 

> -/* Expand unary ops COND_LEN_*.  */
> -void
> -expand_cond_len_unop (rtx_code code, rtx *ops)
> +/* Subroutine to expand COND_LEN_* patterns.  */
> +static void
> +expand_cond_len_op (rtx_code code, unsigned icode, int op_num, rtx *cond_ops,
> +                 rtx len)
>  {

Would you mind renaming op_num (usually understood as "operand number")
to num_ops or nops (i.e. number of operands)?  That way we would be more
in line with what the later expander functions do.

> -  rtx dest = ops[0];
> -  rtx mask = ops[1];
> -  rtx src = ops[2];
> -  rtx merge = ops[3];
> -  rtx len = ops[4];
> +  rtx dest = cond_ops[0];
> +  rtx mask = cond_ops[1];

I would actually prefer to keep "ops" because it's already clear from the
function name that we work with a conditional function (and we don't have
any other ops).

>  
> +/* Expand unary ops COND_LEN_*.  */
> +void
> +expand_cond_len_unop (rtx_code code, rtx *ops)
> +{
> +  rtx dest = ops[0];
> +  rtx mask = ops[1];
> +  rtx src = ops[2];
> +  rtx merge = ops[3];
> +  rtx len = ops[4];
> +
> +  machine_mode mode = GET_MODE (dest);
> +  insn_code icode = code_for_pred (code, mode);
> +  rtx cond_ops[RVV_UNOP_MASK] = {dest, mask, merge, src};
> +  expand_cond_len_op (code, icode, RVV_UNOP_MASK, cond_ops, len);
> +}

We're already a bit inconsistent with how we pass the mask, merge and
source operands.  Maybe we could also unify this a bit?  I don't have a
clear preference for either, though.

> +  rtx cond_ops[RVV_BINOP_MASK] = {dest, mask, merge, src1, src2};

Here, the merge comes before the sources as well.

> +  rtx cond_ops[RVV_TERNOP_MASK] = {dest, mask, src1, src2, src3, merge};

And here, the merge comes last.  I realize this makes sense in the context
of a ternary operation because the merge is always "real".  As our vector
patterns are similar, maybe we should use this ordering all the time?

Regards
 Robin
