On Tue, Nov 07, 2017 at 03:05:55PM +0000, Alex Bennée wrote:
> This is similar to the approach used for the FP/simd data insofar as
> we generate a block of random data and then load the registers from
> it. As there are no post-index SVE load operations, we need to emit an
> additional incp instruction to generate our offset into the array.
> 
> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
> ---
>  risugen        |  3 +++
>  risugen_arm.pm | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++-------
>  2 files changed, 53 insertions(+), 7 deletions(-)
> 
> diff --git a/risugen b/risugen
> index aba4bb7..0ac8e86 100755
> --- a/risugen
> +++ b/risugen
> @@ -317,6 +317,7 @@ sub main()
>      my $condprob = 0;
>      my $fpscr = 0;
>      my $fp_enabled = 1;
> +    my $sve_enabled = 1;
>      my $big_endian = 0;
>      my ($infile, $outfile);
>  
> @@ -334,6 +335,7 @@ sub main()
>                  },
>                  "be" => sub { $big_endian = 1; },
>                  "no-fp" => sub { $fp_enabled = 0; },
> +                "sve" => sub { $sve_enabled = 1; },
>          ) or return 1;
>      # allow "--pattern re,re" and "--pattern re --pattern re"
>      @pattern_re = split(/,/,join(',',@pattern_re));
> @@ -361,6 +363,7 @@ sub main()
>          'fpscr' => $fpscr,
>          'numinsns' => $numinsns,
>          'fp_enabled' => $fp_enabled,
> +        'sve_enabled' => $sve_enabled,
>          'outfile' => $outfile,
>          'details' => \%insn_details,
>          'keys' => \@insn_keys,
> diff --git a/risugen_arm.pm b/risugen_arm.pm
> index 2f10d58..8d1e1fd 100644
> --- a/risugen_arm.pm
> +++ b/risugen_arm.pm
> @@ -472,9 +472,47 @@ sub write_random_aarch64_fpdata()
>      }
>  }
>  
> -sub write_random_aarch64_regdata($)
> +sub write_random_aarch64_svedata()
>  {
> -    my ($fp_enabled) = @_;
> +    # Load SVE registers
> +    my $align = 16;
> +    my $vl = 16;                             # number of vqs

Would this be better phrased as

        my $vq = 16;                            # quadwords per vector

> +    my $datalen = (32 * $vl * 16) + $align;
> +
> +    write_pc_adr(0, (3 * 4) + ($align - 1)); # insn 1
> +    write_align_reg(0, $align);              # insn 2
> +    write_jump_fwd($datalen);                # insn 3
> +
> +    # align safety
> +    for (my $i = 0; $i < ($align / 4); $i++) {
> +        # align with nops
> +        insn32(0xd503201f);
> +    };
> +
> +    for (my $rt = 0; $rt <= 31; $rt++) {
> +        for (my $q = 0; $q < $vl; $q++) {
> +            write_random_fpreg_var(4); # quad
> +        }
> +    }
> +
> +    # Reset all the predicate registers to all true
> +    for (my $p = 0; $p < 16; $p++) {
> +        insn32(0x2518e3e0 | $p);
> +    }
> +
> +    # there is no post index load so we do this by hand
> +    write_mov_ri(1, 0);
> +    for (my $rt = 0; $rt <= 31; $rt++) {
> +        # ld1d    z0.d, p0/z, [x0, x1, lsl #3]
> +        insn32(0xa5e14000 | $rt);
> +        # incp    x1, p0.d
> +        insn32(0x25ec8801);

You could avoid this with the unpredicated form LDR (vector).
(LD1x scalar+immediate doesn't provide enough immediate range).

        # ldr   z$rt, [x0, #$rt, mul vl]
        insn32(0x85804000 + $rt + (($rt & 7) << 10) + (($rt & 0x18) << 13));

which is what the kernel does.
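
For example (untested sketch on my part, keeping x0 pointing at the
aligned data block as in your patch), the whole load loop collapses to:

        for (my $rt = 0; $rt <= 31; $rt++) {
            # ldr   z$rt, [x0, #$rt, mul vl]
            insn32(0x85804000 + $rt + (($rt & 7) << 10) + (($rt & 0x18) << 13));
        }

with no need for the write_mov_ri(1, 0) or the incp bookkeeping.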

No harm in exercising different instructions though!  The kernel uses
embarrassingly few.


Does it matter that the stride will depend on the actual current VL?
If x0 just points to a block of random data, I guess it doesn't matter:
some trailing data remains unused, but that doesn't make the used data
any less random.
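
(If I've read the sizing right, $datalen = (32 * 16 * 16) + $align covers
the architectural maximum of 256 bytes per Z register, so roughly:

        block size     = 32 * 16 * 16 = 8192 bytes
        bytes consumed = 32 * VL     <= 32 * 256 = 8192 bytes

i.e. any VL below the maximum just leaves the tail of the block unused.)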

[...]

Cheers
---Dave
