Hi,
this patch adds some missing tests for vf[nw]cvt.
Regards
Robin
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/conversions/vfncvt-ftoi-run.c:
Add tests.
* gcc.target/riscv/rvv/autovec/conversions/vfncvt-ftoi-rv32gcv.c:
Ditto.
*
Hi,
this patch enables pressure-aware scheduling for riscv. There have been
various requests for it so I figured I'd just go ahead and send
the patch.
There is some slight regression in code quality for a number of
vector tests where we spill more due to a different instruction order.
The ones I
> This little patch fixes the -march error of a zhinxmin testcase I added earlier
> and an old zhinxmin testcase, since these testcases are for zhinxmin extension
> and not zfhmin extension.
Argh, I should have noticed that ;)
OK, of course.
Regards
Robin
Indeed all ANYLSF patterns have TARGET_HARD_FLOAT (==f extension) which
is incompatible with ZHINX or ZHINXMIN anyway. That should really be fixed
separately or at least clarified, maybe I'm missing something.
Still we can go forward with the patch itself as it improves things
independently, so
OK, thanks.
Regards
Robin
Hi Lehua,
> XPASS: gcc.target/riscv/rvv/autovec/partial/slp-1.c scan-assembler \\tvand
> XPASS: gcc.target/riscv/rvv/autovec/partial/slp-1.c scan-assembler \\tvand
> XPASS: gcc.target/riscv/rvv/autovec/partial/slp-1.c scan-assembler \\tvand
> XPASS: gcc.target/riscv/rvv/autovec/partial/slp-1.c
Hi Lehua,
thanks for fixing this. Looks like the same reason we have the
separation of zvfh and zvfhmin for vector loads/stores.
> +;; Iterator for hardware-supported load/store floating-point modes.
> +(define_mode_iterator ANYLSF [(SF "TARGET_HARD_FLOAT || TARGET_ZFINX")
> +
Hi Lehua,
unrelated but I'm seeing a lot of failing gather/scatter tests on
master right now.
> /* DIRTY -> DIRTY or VALID -> DIRTY. */
> + if (block_info.reaching_out.demand_p (DEMAND_NONZERO_AVL)
> + && vlmax_avl_p (prop.get_avl ()))
> +
> I'm not opposed to merging the test change, but I couldn't figure out
> where in C the implicit conversion was coming from: as far as I can
> tell the macros don't introduce any (it's "return _float16 *
> _float16"), I'd had the patch open since last night but couldn't
> figure it out.
>
> We
> But if it's a float16 precision issue then I would have expected both
> the computations for the lhs and rhs values to have suffered
> similarly.
Yeah, right. I didn't look closely enough. The problem is not the
reduction but the additional return-value conversion that is omitted
when
> However:
>
> | #define vec_extract_direct { 3, 3, false }
>
> This looks wrong. The numbers are argument numbers (or -1 for a return
> value). vec_extract only takes 2 arguments, so 3 looks to be out-of-range.
>
> | #define direct_vec_extract_optab_supported_p direct_optab_supported_p
>
>
Hi,
this patch changes the equality check for the reduc_strict_run-1
testcase from == to fabs () < EPS. The FAIL only occurs with
_Float16 but I'd argue approximate equality is preferable for all
float modes.
Regards
Robin
gcc/testsuite/ChangeLog:
*
Hi,
this patch fixes the case where vec_extract gets passed a promoted
subreg (e.g. from a return value). When such a subreg is the
destination of a vector extraction we create a separate pseudo
register and ensure that the necessary promotion is performed
afterwards.
Before this patch a
> Plz put your testcases into:
>
> # widening operation only test on LMUL < 8
> set AUTOVEC_TEST_OPTS [list \
> {-ftree-vectorize -O3 --param riscv-autovec-lmul=m1} \
> {-ftree-vectorize -O3 --param riscv-autovec-lmul=m2} \
> {-ftree-vectorize -O3 --param riscv-autovec-lmul=m4} \
>
> Currently, autovec_length_operand predicate incorrect configuration is
> discovered in PR110989 since this following situation:
In case you haven't committed it yet: This is OK.
Regards
Robin
Hi Kewen,
> I did a bootstrapping and regression testing on Power10 (LE) and found a lot
> of failures.
I think the problem is that just like for vec_set we're expecting
the vec_extract expander not to fail. It is probably no longer passed
a const int here and therefore fails to expand?
> Is this patch ok ? Maybe we can find a way to add a target specific
> fortran test but should not block this bug fix.
It's not much different than adding a C testcase actually, apart from
starting comments with a !
But well, LGTM. The test doesn't look that complicated and quite likely
is
> Hmm, I think VEC_EXTRACT and VEC_SET should be ECF_CONST. Maybe the
> GIMPLE ISEL
> comments do not match the implementation, but then that should be fixed?
>
> /* Expand all ARRAY_REF(VIEW_CONVERT_EXPR) gimple assignments into calls
> to
>internal function based on vector type of
OK.
Regards
Robin
Is the testcase already in the test suite? If not we should add it.
Apart from that LGTM.
Regards
Robin
Yeah, thanks, better in this separate patch.
OK.
Regards
Robin
OK, thanks.
Regards
Robin
> We seem to be looking at promotions of the call argument, lhs_type
> is the same as the type of the call LHS. But the comment mentions .POPCOUNT
> and the following code also handles others, so maybe handling should be
> moved. Also when we look to vectorize popcount (x) instead of
> Presumably this is an alternative to the approach Juzhe posted a week
> or two ago and ultimately dropped?
Yeah, I figured having a generic fallback could help more targets.
We can still have a better expander if we see the need.
Regards
Robin
> Hmm, the conversion should be a separate statement so I wonder
> why it would go wrong?
It is indeed. Yet, lhs_type is the lhs type of the conversion
and not the call and consequently we compare the precision of
the converted type with the popcount input.
So we should probably rather do
Hi Juzhe,
just some nits.
> - else if (rtx_equal_p (step, constm1_rtx) && poly_int_rtx_p (base, )
> + else if (rtx_equal_p (step, constm1_rtx)
> +&& poly_int_rtx_p (base, )
Looks like just a line-break change and the line is not too long?
> - rtx ops[] = {dest, vid, gen_int_mode
> Well, not sure how VECT_COMPARE_COSTS can help here, we either
> get the pattern or vectorize the original function. There's no special
> handling
> for popcount in vectorizable_call so all special cases are handled via
> patterns.
> I was thinking of popcounthi via popcountsi and zero-extend
> Could you please help to share how to enable checks here?
Build with --enable-checking or rather --enable-checking=extra.
Regards
Robin
of a proper search term. Also, I figured the 2-byte repeating
sequences might be trickier anyway and therefore kept it as is.
If you find it too cumbersome I can look for an alternative.
Right now it closely matches what the example C code says which
is not too bad IMHO.
Regards
Robin
From 03d7e9533
Hi,
This patch adds a fallback when the backend does not provide a popcount
implementation. The algorithm is the same one libgcc uses, as well as
match.pd for recognizing a popcount idiom. __builtin_ctz and __builtin_ffs
can also rely on popcount so I used the fallback for them as well.
Hi Juzhe,
thanks, looks good from my side.
> +/* { dg-final { scan-assembler-times {vand\.vi\s+v[0-9]+,\s*v[0-9]+,\s*-16}
> 42 } } */
> +/* { dg-final { scan-assembler-not {csrr} } } */
I was actually looking for a scan-assembler-not vsetvli... but the
csrr will do as well.
Regards
Robin
Hi,
originally inspired by the wish to transform
vmv v3, a0 ; = vec_duplicate
vadd.vv v1, v2, v3
into
vadd.vx v1, v2, a0
via fwprop for riscv, this patch enables the forward propagation
of UNARY_P sources.
As this involves potentially replacing a vector register with
a scalar register the
Hi Juzhe,
I would find it a bit clearer if the prepare_ternary part were a
separate patch. As it's mostly mechanical replacements I don't
mind too much, though, so it's LGTM from my side without that.
As to the lmul = 8 ICE, is the problem that the register allocator
would actually need 5
> 1. How do you model round to +Inf (avg_floor) and round to -Inf (avg_ceil) ?
That's just specified by the +1 or the lack of it in the original pattern.
Actually the IFN is just a detour because we would create perfect code
if not for the fallback. But as there is currently no way to check for
Hi,
this patch adds vector average patterns
op[0] = (narrow) ((wide) op[1] + (wide) op[2]) >> 1;
op[0] = (narrow) ((wide) op[1] + (wide) op[2] + 1) >> 1;
If there is no direct support, the vectorizer can synthesize the patterns
but, presumably due to lack of narrowing operation support, won't
Hi Joern,
thanks, I believe this will help with testing.
> +proc check_effective_target_riscv_v { } {
> +return [check_no_compiler_messages riscv_ext_v assembly {
> + #ifndef __riscv_v
> + #error "Not __riscv_v"
> + #endif
> +}]
> +}
This can be replaced by riscv_vector
>>> I'm not against continuing with the more well-known approach for now
>>> but we should keep in mind that might still be potential for improvement.
>
> No. I don't think it's faster.
I did a quick check on my x86 laptop and it's roughly 25% faster there.
That's consistent with the literature.
> +/* FIXME: We don't allow vectorize "__builtin_popcountll" yet since it needs
> "vec_pack_trunc" support
> + and such pattern may cause inferior codegen.
> + We will enable "vec_pack_trunc" when we support reasonable vector
> cost model. */
Wait, why do we need vec_pack_trunc
Hi Juzhe,
> +/* Expand Vector POPCOUNT by parallel popcnt:
> +
> + int parallel_popcnt(uint32_t n) {
> + #define POW2(c) (1U << (c))
> + #define MASK(c) (static_cast<uint32_t>(-1) / (POW2(POW2(c)) + 1U))
> + #define COUNT(x, c) ((x) & MASK(c)) + (((x)>>(POW2(c))) & MASK(c))
> + n =
> +;; -------------------------------------------------------------------------
> +;; Duplicate Operations
> +;; -------------------------------------------------------------------------
> +
> +(define_insn_and_split "@vec_duplicate<mode>"
> + [(set (match_operand:VLS 0 "register_operand")
> +
> This is a draft patch. I would like to explain it's hard to make the
> simplify generic and ask for some help.
>
> There're 2 categories we need to optimize.
>
> - The op in optab such as div / 1.
> - The unspec operation such as mulh * 0, (vadc+vmadc) + 0.
>
> Especially for the unspec
From 65e69834eeb08ba093786e386ac16797cec4d8a7 Mon Sep 17 00:00:00 2001
From: Robin Dapp
Date: Mon, 24 Jul 2023 16:25:38 +0200
Subject: [PATCH] gcse: Extract reg pressure handling into separate file.
This patch extracts the hoist-pressure handling from gcse into a separate
file so it can be used by other passes in the fut
Hi Pan,
thanks for your patience and your work. Apart from my general doubt
whether mode-changing intrinsics are a good idea, I don't have other
remarks that need fixing. What I mentioned before:
- Handling of asms wouldn't be a huge change. It can be done
in a follow-up patch of course but
> LGTM, I just found this patch still on the list, I mostly tested with
> qemu, so I don't think that is a problem before, but I realize it's a
> problem when we run on a real board that does not support those
> extensions.
I think we can skip this one as I needed to introduce vector_hw and
> I see, you mean at the beginning of frm_after, we can just return the
> incoming mode as is?
>
> If (CALL_P (insn))
> return mode; // Given we aware the mode is DYN_CALL already.
Yes, potentially similar for all the other ifs but I didn't check
all of them.
> Thank and will cleanup this
>> Why do we appear to return a different mode here? We already request
>> FRM_MODE_DYN_CALL in mode_needed. It looks like in the whole function
>> we do not change the mode so we could just always return the incoming
>> mode?
>
> Because we need to emit 2 insn when meet a call. One before the
> I would like to propose that being focus and moving forward for this
> patch itself, the underlying other RVV floating point API support and
> the RVV instrinsic API fully tests depend on this.
Sorry, I didn't mean to ditch LCM/mode switching. I believe it is doing
a pretty good job and we
Hi Juzhe,
just some small remarks, all in all no major concerns.
> + vmv%m1r.v\t%0,%1"
> + "&& (!register_operand (operands[0], <MODE>mode)
> + || !register_operand (operands[1], <MODE>mode))"
> + [(const_int 0)]
> + {
> +unsigned size = GET_MODE_BITSIZE (<MODE>mode).to_constant ();
> +if (size
> CSR write could be expensive, it will flush whole pipeline in some
> RISC-V core implementation…
Hopefully not flush but just sequentialize but yes, it's usually a
performance concern. However if we set the rounding mode to something
else for an intrinsic and then call a function we want to
> current llvm didn't do any pre optimization. They always
> backup+restore for each rounding mode intrinsic
I see. There is still the option of lazily restoring the
(entry) FRM before a function call but not read the FRM
after every call. Do we have any data on how good or bad the
So after thinking about it again - I'm still not really sure
I like treating every function as essentially an fesetround.
There is a reason why fesetround is special. Does LLVM behave
the same way?
But supposing we really, really want it and assuming there's consensus:
+ start_sequence ();
+
> The call fesetround could be any function in practice, and we never
> know if that function might use dynamic rounding mode floating point
> operation or not, also we don't know if it will be called fesetround
> or not.
>
> So that's why we want to restore before function call to make sure we
>
Hi Pan,
> Given we have a call, we would like to restore before call and then
> backup frm after call. Looks current mode switching cannot emit insn
> like that, it can only either emit insn before (mostly) or after
> (when NOTE_INSN_BASIC_BLOCK_P). Thus, we try to emit the one after
> call when
Hi Jin,
this looks reasonable. Would you mind adding (small) test cases
still to make sure we don't accidentally reintroduce the problem?
Regards
Robin
Hi Pan,
> + for (insn = PREV_INSN (cur_insn); insn; insn = PREV_INSN (insn))
> +{
> + if (INSN_P (insn))
> + {
> + if (CALL_P (insn))
> + mode = FRM_MODE_DYN;
> + break;
> + }
> +
> + if (insn == BB_HEAD (bb))
> + break;
> +}
> +
> + return
mode cannot hold
the range so it still has a chance to fit in the next larger one.
Bootstrap and testsuite are unchanged on x86, aarch64 and power and
I'm going to commit the attached barring further remarks.
Regards
Robin
From cabfa07256eafec4485304fe7639d8fd7512cf11 Mon Sep 17 00:00:00 200
> LGTM, but I would like to make sure Robin is OK too
Yes, LGTM as well.
Regards
Robin
> The UNORDERED enum will cause ICE since we have UNORDERED in rtx_code.
>
> Could you give me another enum name?
I would have expected it to work when it's namespaced.
Regards
Robin
> +enum reduction_type
> +{
> + UNORDERED_REDUDUCTION,
> + FOLD_LEFT_REDUDUCTION,
> + MASK_LEN_FOLD_LEFT_REDUDUCTION,
> +};
There are redundant 'DU's here ;)
Wouldn't it be sufficient to have an enum
enum reduction_type
{
UNORDERED,
FOLD_LEFT,
MASK_LEN_FOLD_LEFT,
};
?
Regards
Robin
Hi Juzhe,
I just noticed that we recently started calling things MASK_LEN
(instead of LEN_MASK before) with the reductions. Wouldn't we want
to be consistent here? Especially as the length takes precedence.
I realize the preparational work like optabs is already upstream
but still wanted to
Hi Lehua,
> I think you are right, I would like to remove the `-mcmodel=medany` option and
> relax assert from `__riscv_save/restore_4` to `__riscv_save/restore_(3|4)` to
> let
> this testcase not brittle on any -mcmodel. Then I'm also going to add another
> testcase (I don't know how to run
Hi Lehua,
> I think the purpose of this testcase is to check whether the modifications to
> the stack frame are as expected, so it is necessary to specify exactly whether
> three or four registers are saved. But I think we need to add another
> testcase
> which use another option
OK.
Regards
Robin
Hi Lehua,
> This patch fixes a testcase failure when I build RISC-V GCC with -mcmodel=medany
> as default. If set to medany, stack_save_restore.c testcase will fail because
> of
> the reduced use of s3 registers in assembly (thus calling __riscv_save/store_3
> instead of __riscv_save/store_4).
Hi Juzhe,
> +;; -------------------------------------------------------------------------
> +;; [INT,FP] Initialize from individual elements
> +;; -------------------------------------------------------------------------
> +;; Includes:
> +;; - vslide1up.vx/vfslide1up.vf
> +;;
>>> Can you add testcases? Also the current restriction is because
>>> the variants you add are not always correct and I don't see any
>>> checks that the intermediate type doesn't lose significant bits?
I didn't manage to create one for either aarch64 or x86 because AVX512
has direct conversions
Hi Juzhe,
thanks, looks good to me now - did before already actually ;).
Regards
Robin
> Is COND _LEN FMA ok for trunk? I can commit it without changing
> scatter store testcase fix.
>
> It makes no sense block cond Len fma support. The middle end support
> has already been merged.
Then just add a TODO or so that says e.g. "For some reason we exceed
the default code model's +-2
From my understanding, we don't have an RVV instruction for fmax/fmin?
>
> Unless I'm misunderstanding, we do. The ISA manual says
>
> === Vector Floating-Point MIN/MAX Instructions
>
> The vector floating-point `vfmin` and `vfmax` instructions have the
> same behavior as the
> Can you add testcases? Also the current restriction is because
> the variants you add are not always correct and I don't see any
> checks that the intermediate type doesn't lose significant bits?
The testcases I wanted to add with a follow-up RISC-V patch but
I can also try an aarch64 one.
So
Hi Juzhe,
thanks, no complaints from my side apart from one:
> +/* { dg-additional-options "-mcmodel=medany" } */
Please add a comment why we need this.
Regards
Robin
Hi,
the recent changes that allowed multi-step conversions for
"non-packing/unpacking", i.e. modifier == NONE targets included
promoting to-float and demoting to-int variants. This patch
adds demoting to-float and promoting to-int handling.
Bootstrapped and regtested on x86 and aarch64.
A
> +enum __RISCV_VXRM {
> + __RISCV_VXRM_RNU = 0,
> + __RISCV_VXRM_RNE = 1,
> + __RISCV_VXRM_RDN = 2,
> + __RISCV_VXRM_ROD = 3,
> +};
> +
> __extension__ extern __inline unsigned long
> __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> vread_csr(enum RVV_CSR csr)
We have
Hi Juzhe,
> +/* Return true if the operation is the floating-point operation need FRM. */
> +static bool
> +need_frm_p (rtx_code code, machine_mode mode)
> +{
> + if (!FLOAT_MODE_P (mode))
> +return false;
> + return code != SMIN && code != SMAX;
> +}
Return true if the operation requires
> int32_t x = (int32_t)0x1.0p32;
> int32_t y = (int32_t)(int64_t)0x1.0p32;
>
> sets x to 2147483647 and y to 0.
>>>
>>> Hmm, good question. GENERIC has a direct truncation to unsigned char
>>> for example, the C standard generally says if the integral part cannot
>>> be
Attached is v2 that does not switch to uint64_t but stays within
32 bits by shifting the optab by 20 and the mode(s) by 10 bits.
Regards
Robin
Upcoming changes for RISC-V will have us exceed 255 modes or 8 bits.
This patch increases the limit to 10 bits and adjusts the hashing
function for the
Ok so the consensus seems to rather stay with 32 bits and only
change the shift to 10/20? As MACHINE_MODE_BITSIZE is already
16 we would need an additional check independent of that.
Wouldn't that also be a bit confusing?
Attached is a "v2" with unsigned long long changed to
uint64_t and
> if (NUM_OPTABS > 0x
> || MAX_MACHINE_MODE >= ((1 << MACHINE_MODE_BITSIZE) - 1))
> fatal ("genopinit range assumptions invalid");
>
> so it would be a case of changing those instead.
Thanks, right at the beginning of the file and I didn't see it ;)
MACHINE_MODE_BITSIZE is already
> MASK4 0, 5, 6, 7 also works definitely
Sure :) My remark was that the tests are all(?)
evenly split and a bit more variation would have been nice.
Not that it doesn't work, I'm OK with it as is.
Regards
Robin
> The compress optimization pattern has included all variety.
> It's not necessary to force split (half/half), we can apply this compress
> pattern to any variety of compress pattern.
Yes, that's clear. I meant the testcases are mostly designed
like
MASK4 1, 2, 6, 7
instead of variation like
Hi Juzhe,
looks good from my side, thanks. While going through it I
thought of some related cases that we could still handle
differently but I didn't bother to formalize them for now.
Most likely we already handle them in the shortest way
anyway. I'm going to check on that when I find some time
Hi,
upcoming changes for RISC-V will have us exceed 256 modes or 8 bits. The
helper functions in gen* rely on the opcode as well as two modes fitting
into an unsigned int (a signed int even if we consider the qsort default
comparison function). This patch changes the type of the index/hash
from
Hi Juzhe,
thanks, the somewhat unified modulo is IMHO more readable.
Could probably still be improved but OK with me for now.
> + if (is_dummy_len)
> + {
> + rtx dummy_len = gen_reg_rtx (Pmode);
Can we call this is_vlmax_len/is_vlmax and vlmax_len or so?
> + if
Hi Juzhe,
thanks, that's quite a chunk :) and it took me a while to
go through it.
> @@ -564,7 +565,14 @@ const_vec_all_in_range_p (rtx vec, poly_int64 minval,
> poly_int64 maxval)
> static rtx
> gen_const_vector_dup (machine_mode mode, poly_int64 val)
> {
> - rtx c = gen_int_mode (val,
Hi Pan,
thanks, I think that works for me as I'm expecting these
parts to change a bit anyway in the near future.
There is no functional change to the last revision that
Kito already OK'ed so I think you can go ahead.
Regards
Robin
Hi,
Juzhe noticed that several floating-point conversion tests
FAIL on 32 bit. This is due to the autovec FP narrowing patterns
using a truncate instead of a float_truncate which results in
a combine ICE. It would try to e.g. simplify a unary operation by
simplify_const_unary_operation which
Hi,
this patch adds a gen_lowpart in the vec_extract expander so it properly
works with a variable index and adds tests.
Regards
Robin
gcc/ChangeLog:
* config/riscv/autovec.md: Add gen_lowpart.
gcc/testsuite/ChangeLog:
*
Hi,
this patch enables a variable index for vec_set and
adjusts/cleans up the tests.
Regards
Robin
gcc/ChangeLog:
* config/riscv/autovec.md: Allow register index operand.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls-vlmax/vec_set-1.c: Adjust
test.
>> + _4 = vD.2208;
>> + _5 = .VEC_EXTRACT (_4, idx_2(D));
>> + _3 = _5; */
>
> I think you are doing
>
> _3 = .VEC_EXTRACT (_4, idx_2(D));
>
> and avoiding the SSA name copy correctly. Can you double-check?
>
> OK with the comment adjusted.
Argh, yes, thanks.
Hi Pan,
yes, the problem is fixed for me. Still some comments ;) Sorry
it took a while.
> 1. By default, the RVV floating-point will take dyn mode.
> 2. DYN is invalid in FRM register for RVV floating-point.
>
> When mode switching the function entry and exit, it will take DYN as
> the frm
> LGTM, thanks :)
just a moment please, I still wanted to reply ;)
Regards
Robin
> Kito (or somebody else), would you mind doing a RISC-V bootstrap? It would
> take forever on my machine. Thank you.
I did a bootstrap myself now and it finally finished. Going to commit the
attached tomorrow.
Regards
Robin
Subject: [PATCH] Change MODE_BITSIZE to MODE_PRECISION for
Hi Richard,
changed the patch according to your comments and I agree that
it is more readable that way. I hope using lhs as target for
the extract directly is possible the way I did it. Richard's
patch for aarch64 is already in, therefore testsuites on aarch64 and
i386 are unchanged.
Regards
> Just revert this patch, it reports some weird illegal instr, I may
> need more time for this.
The illegal instruction is due to the wrong rounding mode. We set
5 instead of 7 because the two enums don't match. A simple but ugly
fix would be two dummy entries so that FRM_MODE_DYN is entry 7 in
Hi Pan,
in general this looks good to me. I would have expected the
change in the other patch I just looked at though ;) Sure
it's intrinsics this time but the same principle.
Regards
Robin
Hi Pan,
I only just now got back to my mails and I'm a bit confused about
the several patches related to rounding mode.
> 1. By default, the RVV floating-point will take dyn mode.
Here you are referring to 10.1 in the spec I assume. Could we
add this as a comment in the code?
> 2. DYN is
> Sorry for inconvenient, still working on fix it. If urgent I can
> revert this change to unblock your work ASAP.
I'm not blocked by this, thanks, just wanted to document it here.
I was testing another patch and needed to dig for a while until
I realized the FAILs come from this one. In general
Hmm, looks like it wasn't simple enough...
I'm seeing execution fails for various floating point test cases.
This is due to a mismatch between the FRM_DYN definition (0b111 == 7)
and the attribute value (== 5). Therefore we set the rounding mode
to 5 instead of 7.
Regards
Robin
LGTM.
Regards
Robin
Hi,
In gimple-isel we already deduce a vec_set pattern from an
ARRAY_REF(VIEW_CONVERT_EXPR). This patch does the same for a
vec_extract.
The code is largely similar to the vec_set one including
the addition of a can_vec_extract_var_idx_p function
in optabs.cc to check if the backend can handle