Re: [Qemu-devel] [PATCH 1/2] target/openrisc: convert to DisasContextBase

2018-02-17 Thread Emilio G. Cota
On Sun, Feb 18, 2018 at 12:10:46 +0900, Stafford Horne wrote:
> On Sat, Feb 17, 2018 at 08:32:36PM -0500, Emilio G. Cota wrote:
> > Signed-off-by: Emilio G. Cota 
> This looks ok to me, and thanks for testing.  However, I am not so familiar
> with the DisasContextBase.  Is this something new?

The work on having a generic translation loop started a while ago,
picking up steam in June'17 -- look for "Generic translation framework"
threads on the mailing list.

The goal is to have a single loop (accel/tcg/translator.c) to
translate from target code to TCG IR. Apart from reducing code
duplication, this will eventually ease things like inserting
instrumentation, which will have a single injection point
instead of having to patch all targets' translation loops.

Transitioning to the generic translation loop typically
involves three steps:
1- Use of DisasJumpType to mark the exits from the translation loop
2- Use of DisasContextBase to keep track of some state that applies
   to all targets (e.g. num_insns, program counter)
3- Conversion to TranslatorOps, which is a set of function pointers
   called from translator_loop in accel/tcg/translator.c.

You can see an example of 1-3 for Alpha in commits 3de811c, c5f8065
and 99a92b9, respectively.
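
For reference, after steps 1-3 a converted target ends up looking roughly
like this (a simplified sketch of the TranslatorOps interface; see
include/exec/translator.h for the authoritative definitions -- "foo" is
just a placeholder target name):

    /* Step 2: embed the common state in the target's DisasContext.  */
    typedef struct DisasContext {
        DisasContextBase base;  /* pc_first, pc_next, num_insns, is_jmp, ... */
        /* target-specific fields follow */
    } DisasContext;

    /* Step 3: provide the hooks called by translator_loop().  */
    static const TranslatorOps foo_tr_ops = {
        .init_disas_context = foo_tr_init_disas_context,
        .tb_start           = foo_tr_tb_start,
        .insn_start         = foo_tr_insn_start,
        .breakpoint_check   = foo_tr_breakpoint_check,
        /* translate_insn sets base.is_jmp to a DisasJumpType (step 1) */
        .translate_insn     = foo_tr_translate_insn,
        .tb_stop            = foo_tr_tb_stop,
        .disas_log          = foo_tr_disas_log,
    };

    void gen_intermediate_code(CPUState *cs, struct TranslationBlock *tb)
    {
        DisasContext ctx;

        translator_loop(&foo_tr_ops, &ctx.base, cs, tb);
    }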

Quite a few targets have already been converted (you can see which
ones with "git grep '^\s*translator_loop('"); I'm in the
process of converting the remaining ones as long as I can test
them with a boot image (I've been spamming the list with
conversion patches the last few days).

> It would be good to have a commit message to say what it is and why we are
> making the change?

I considered it, but didn't want to annoy everyone by sending the
same explanation many times (once for each converted target).
The purpose and value of this consolidation work
is well-known among people who follow TCG-related threads on
the mailing list (as I said above, this work has been ongoing
for a while), so I think it's reasonable to keep the commit
message empty.

I figured some people would have to be filled in though (like
yourself), and that's why I just wrote the above; now
I can point to this message if this happens again :-)

Hope the background I gave above helps; please let me know
if anything is unclear.

Thanks,

Emilio




[Qemu-devel] [Bug 1211910] Re: Logical to linear address translation is wrong for 32-bit guests on a 64-bit hypervisor

2018-02-17 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1211910

Title:
  Logical to linear address translation is wrong for 32-bit guests on a
  64-bit hypervisor

Status in QEMU:
  Expired

Bug description:
  I run a 64-bit hypervisor in qemu-system-x86_64 (without KVM) and on top of
that I have a 32-bit guest. The guest configures the code-segment to have a
base of 0x4000_0000 and a limit of 0xffff_ffff with paging disabled. Thus, if a
logical address of e.g. 0xC000_0000 is used, it should be translated to
0x0000_0000 (linear and physical), because of the 32-bit overflow that happens.
  But this does not happen with the described setup. Instead, QEMU seems to
calculate the logical-to-linear translation with 64-bit addresses so that no
overflow happens. Consequently, the resulting address is 0x1_0000_0000 and this
gets written to exitinfo2 in the VMCB structure. This causes trouble for
hypervisors that expect the upper 32 bits of exitinfo2 to be 0 for 32-bit
guests.
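
  To make the expected arithmetic concrete, here is a minimal standalone C
  sketch of the 32-bit wrap-around described above (illustrative only -- this
  is not QEMU code, and the constants are the ones from this report):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cs_base = 0x40000000u;  /* code-segment base from the report */
        uint32_t logical = 0xC0000000u;  /* logical address used by the guest */

        /* Expected 32-bit behaviour: the sum wraps around to 0. */
        uint32_t linear32 = cs_base + logical;            /* 0x00000000 */

        /* Behaviour described above: 64-bit arithmetic, no wrap. */
        uint64_t linear64 = (uint64_t)cs_base + logical;  /* 0x100000000 */

        printf("32-bit: 0x%08x  64-bit: 0x%llx\n",
               linear32, (unsigned long long)linear64);
        return 0;
    }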

  Note also that the exact same setup runs fine on real AMD machines
  with SVM. That is, the upper 32 bits in exitinfo2 are always 0 because
  of the overflow.

  I've tested that with the latest development version of QEMU (commit
  328465fd9f3a628ab320b5959d68d3d49df58fa6).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1211910/+subscriptions



[Qemu-devel] [Bug 1202289] Re: Windows 2008/7 Guest to Guest Very slow 10-20Mbit/s

2018-02-17 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1202289

Title:
  Windows 2008/7 Guest to Guest Very slow 10-20Mbit/s

Status in QEMU:
  Expired

Bug description:
  I'm not sure if I'm submitting this to the proper place or not; if
  not, please direct me accordingly.

  At this point I'm starting to get desperate; I'll take any options or
  suggestions that spring to mind.

  Anyway, the problem exists on multiple hosts of various quality,
  from 4-core/8GB machines to 12-core/64GB machines with LVM and
  RAID-10.

  Using iperf as the testing utility (the Windows guest can be either Windows 7
or 2008R2):
  -Windows Guest -> Windows Guest averages 20Mbit/s (The problem)
  -Windows Guest -> Host averages 800Mbit/s
  -Host -> Windows Guest averages 1.1Gbit/s
  -Linux Guest -> Host averages 12GBit/s
  -Linux Guest -> Linux Guest averages 10.2Gbit/s

  For windows guests, switching between e1000 and virtio drivers doesn't
  make much of a difference.

  I use openvswitch to handle the bridging (makes bonding nics much
  easier)

  Disabling TSO/GRO on all the host NICs and virtual NICs, as well as modding
the registry using:
  netsh int tcp set global (various params here)
  can slightly improve Windows -> Windows throughput, up to maybe 100Mbit/s,
but even that is spotty at best.

  The particulars of the fastest host, which benchmarks about the same as
  the slowest host:

  Ubuntu 12.04 64-bit (updated to latest as of July 15th)
  Linux cckvm03 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

  libvirt: 
  Source: libvirt
  Version: 0.9.8-2ubuntu17.10

  qemu-kvm
  Package: qemu-kvm
  Version: 1.0+noroms-0ubuntu14.8
  Replaces: kvm (<< 1:84+dfsg-0ubuntu16+0.11.0), kvm-data, qemu

  openvswitch
  Source: openvswitch
  Version: 1.4.0-1ubuntu1.5

  /proc/cpuinfo

  processor   : 0
  vendor_id   : GenuineIntel
  cpu family  : 6
  model   : 45
  model name  : Intel(R) Xeon(R) CPU E5-2440 0 @ 2.40GHz
  stepping: 7
  microcode   : 0x70d
  cpu MHz : 2400.226
  cache size  : 15360 KB
  physical id : 0
  siblings: 12
  core id : 0
  cpu cores   : 6
  apicid  : 0
  initial apicid  : 0
  fpu : yes
  fpu_exception   : yes
  cpuid level : 13
  wp  : yes
  flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
  pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
  rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
  nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
  ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
  aes xsave avx lahf_lm ida arat xsaveopt pln pts dtherm tpr_shadow vnmi
  flexpriority ept vpid
  bogomips: 4800.45
  clflush size: 64
  cache_alignment : 64
  address sizes   : 46 bits physical, 48 bits virtual
  power management:

  
  -Sample KVM line
  /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 4096 -smp 
2,sockets=2,cores=1,threads=1 -name gvexch01 -uuid 
d28ffb4b-d809-3b40-ae3d-2925e6995394 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/gvexch01.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime 
-no-shutdown -boot order=dc,menu=on -drive 
file=/dev/vgroup/gvexch01,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
 -device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive 
file=/dev/vgroup/gvexch01-d,if=none,id=drive-virtio-disk1,format=raw,cache=none 
-device 
virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1 
-drive if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw -device 
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev 
tap,fd=18,id=hostnet0,vhost=on,vhostfd=21 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:bf:4e:1c,bus=pci.0,addr=0x3 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-usb -device usb-tablet,id=input0 -vnc 127.0.0.1:2 -vga std -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1202289/+subscriptions



[Qemu-devel] [Bug 1299858] Re: qemu all apps crash on OS X 10.6.8

2018-02-17 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1299858

Title:
  qemu all apps crash on OS X 10.6.8

Status in QEMU:
  Expired

Bug description:
  qemu-2.0.0-rc0 (and 1.7.1) crashes with SIGABRT in all apps when
  configured with --with-coroutine=sigaltstack (which is what configure
  selects by default), but all run fine if configured with --with-
  coroutine=gthread.

  Crash is at line 253 (last line of Coroutine
  *qemu_coroutine_new(void)) in coroutine-sigaltstack.c in 2.0.0-rc0
  tarball.

  Platform is OS X 10.6.8 (Darwin Kernel Version 10.8.0), compiler gcc
  4.2.1

  Sorry for the sparse report but I'm short on time today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1299858/+subscriptions



Re: [Qemu-devel] [PATCH 2/2] target/openrisc: convert to TranslatorOps

2018-02-17 Thread Stafford Horne
On Sat, Feb 17, 2018 at 08:32:37PM -0500, Emilio G. Cota wrote:
> Notes:
> 
> - Changed the num_insns test in tb_start to check for
>   dc->base.num_insns > 1, since when tb_start is first
>   called in a TB, base.num_insns is already set to 1.
> 
> - Removed DISAS_NEXT from the switch on tb_stop; use DISAS_TOO_MANY
>   instead.
> 
> - Added an assert_not_reached on tb_stop for DISAS_NEXT and the default
>   case.
> 
> - Merged the two separate log_target_disas calls into the disas_log op.

Hello, thanks for this again.  But just wondering if you can add some background
to the commit message?  What's the benefit?  Probably something like "Updating
the translate core to use the new DisasContext and other APIs which are being
adopted in other targets..."

-Stafford

> Signed-off-by: Emilio G. Cota 
> ---
>  target/openrisc/translate.c | 168 
> ++--
>  1 file changed, 85 insertions(+), 83 deletions(-)
> 
> diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
> index 0450144..4af4569 100644
> --- a/target/openrisc/translate.c
> +++ b/target/openrisc/translate.c
> @@ -49,6 +49,7 @@ typedef struct DisasContext {
>  uint32_t mem_idx;
>  uint32_t tb_flags;
>  uint32_t delayed_branch;
> +uint32_t next_page_start;
>  } DisasContext;
>  
>  static TCGv cpu_sr;
> @@ -1519,46 +1520,23 @@ static void disas_openrisc_insn(DisasContext *dc, 
> OpenRISCCPU *cpu)
>  }
>  }
>  
> -void gen_intermediate_code(CPUState *cs, struct TranslationBlock *tb)
> +static int openrisc_tr_init_disas_context(DisasContextBase *dcbase,
> +  CPUState *cs, int max_insns)
>  {
> +DisasContext *dc = container_of(dcbase, DisasContext, base);
>  CPUOpenRISCState *env = cs->env_ptr;
> -OpenRISCCPU *cpu = openrisc_env_get_cpu(env);
> -struct DisasContext ctx, *dc = &ctx;
> -uint32_t pc_start;
> -uint32_t next_page_start;
> -int num_insns;
> -int max_insns;
>  
> -pc_start = tb->pc;
> -
> -dc->base.tb = tb;
> -dc->base.singlestep_enabled = cs->singlestep_enabled;
> -dc->base.pc_next = pc_start;
> -dc->base.is_jmp = DISAS_NEXT;
> -
> -dc->mem_idx = cpu_mmu_index(&cpu->env, false);
> +dc->mem_idx = cpu_mmu_index(env, false);
>  dc->tb_flags = dc->base.tb->flags;
>  dc->delayed_branch = (dc->tb_flags & TB_FLAGS_DFLAG) != 0;
> +dc->next_page_start = (dc->base.pc_first & TARGET_PAGE_MASK) +
> +TARGET_PAGE_SIZE;
> +return max_insns;
> +}
>  
> -next_page_start = (pc_start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
> -num_insns = 0;
> -max_insns = tb_cflags(tb) & CF_COUNT_MASK;
> -
> -if (max_insns == 0) {
> -max_insns = CF_COUNT_MASK;
> -}
> -if (max_insns > TCG_MAX_INSNS) {
> -max_insns = TCG_MAX_INSNS;
> -}
> -
> -if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
> -&& qemu_log_in_addr_range(pc_start)) {
> -qemu_log_lock();
> -qemu_log("\n");
> -qemu_log("IN: %s\n", lookup_symbol(pc_start));
> -}
> -
> -gen_tb_start(tb);
> +static void openrisc_tr_tb_start(DisasContextBase *db, CPUState *cs)
> +{
> +DisasContext *dc = container_of(db, DisasContext, base);
>  
>  /* Allow the TCG optimizer to see that R0 == 0,
> when it's true, which is the common case.  */
> @@ -1567,50 +1545,60 @@ void gen_intermediate_code(CPUState *cs, struct 
> TranslationBlock *tb)
>  } else {
>  cpu_R[0] = cpu_R0;
>  }
> +}
> +
> +static void openrisc_tr_insn_start(DisasContextBase *dcbase, CPUState *cs)
> +{
> +DisasContext *dc = container_of(dcbase, DisasContext, base);
>  
> -do {
> -tcg_gen_insn_start(dc->base.pc_next, (dc->delayed_branch ? 1 : 0)
> -| (num_insns ? 2 : 0));
> -num_insns++;
> +tcg_gen_insn_start(dc->base.pc_next, (dc->delayed_branch ? 1 : 0)
> +   | (dc->base.num_insns > 1 ? 2 : 0));
> +}
>  
> -if (unlikely(cpu_breakpoint_test(cs, dc->base.pc_next, BP_ANY))) {
> -tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
> -gen_exception(dc, EXCP_DEBUG);
> +static bool openrisc_tr_breakpoint_check(DisasContextBase *dcbase, CPUState 
> *cs,
> + const CPUBreakpoint *bp)
> +{
> +DisasContext *dc = container_of(dcbase, DisasContext, base);
> +
> +tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
> +gen_exception(dc, EXCP_DEBUG);
> +dc->base.is_jmp = DISAS_UPDATE;
> +/* The address covered by the breakpoint must be included in
> +   [tb->pc, tb->pc + tb->size) in order to for it to be
> +   properly cleared -- thus we increment the PC here so that
> +   the logic setting tb->size below does the right thing.  */
> +dc->base.pc_next += 4;
> +return true;
> +}
> +
> +static void openrisc_tr_translate_insn(DisasContextBase *dcbase, CPUState 
> *cs)
> +{
> +DisasContext *dc = 

Re: [Qemu-devel] [PATCH 1/2] target/openrisc: convert to DisasContextBase

2018-02-17 Thread Stafford Horne
On Sat, Feb 17, 2018 at 08:32:36PM -0500, Emilio G. Cota wrote:
> Signed-off-by: Emilio G. Cota 

Hello,

This looks ok to me, and thanks for testing.  However, I am not so familiar with
the DisasContextBase.  Is this something new?

It would be good to have a commit message to say what it is and why we are
making the change?

-Stafford

> ---
>  target/openrisc/translate.c | 87 
> ++---
>  1 file changed, 43 insertions(+), 44 deletions(-)
> 
> diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
> index 2747b24..0450144 100644
> --- a/target/openrisc/translate.c
> +++ b/target/openrisc/translate.c
> @@ -36,7 +36,8 @@
>  #include "exec/log.h"
>  
>  #define LOG_DIS(str, ...) \
> -qemu_log_mask(CPU_LOG_TB_IN_ASM, "%08x: " str, dc->pc, ## __VA_ARGS__)
> +qemu_log_mask(CPU_LOG_TB_IN_ASM, "%08x: " str, dc->base.pc_next,\
> +  ## __VA_ARGS__)
>  
>  /* is_jmp field values */
>  #define DISAS_JUMP    DISAS_TARGET_0 /* only pc was modified dynamically */
> @@ -44,13 +45,10 @@
>  #define DISAS_TB_JUMP DISAS_TARGET_2 /* only pc was modified statically */
>  
>  typedef struct DisasContext {
> -TranslationBlock *tb;
> -target_ulong pc;
> -uint32_t is_jmp;
> +DisasContextBase base;
>  uint32_t mem_idx;
>  uint32_t tb_flags;
>  uint32_t delayed_branch;
> -bool singlestep_enabled;
>  } DisasContext;
>  
>  static TCGv cpu_sr;
> @@ -126,9 +124,9 @@ static void gen_exception(DisasContext *dc, unsigned int 
> excp)
>  
>  static void gen_illegal_exception(DisasContext *dc)
>  {
> -tcg_gen_movi_tl(cpu_pc, dc->pc);
> +tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
>  gen_exception(dc, EXCP_ILLEGAL);
> -dc->is_jmp = DISAS_UPDATE;
> +dc->base.is_jmp = DISAS_UPDATE;
>  }
>  
>  /* not used yet, open it when we need or64.  */
> @@ -166,12 +164,12 @@ static void check_ov64s(DisasContext *dc)
>  
>  static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
>  {
> -if (unlikely(dc->singlestep_enabled)) {
> +if (unlikely(dc->base.singlestep_enabled)) {
>  return false;
>  }
>  
>  #ifndef CONFIG_USER_ONLY
> -return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +return (dc->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
>  #else
>  return true;
>  #endif
> @@ -182,10 +180,10 @@ static void gen_goto_tb(DisasContext *dc, int n, 
> target_ulong dest)
>  if (use_goto_tb(dc, dest)) {
>  tcg_gen_movi_tl(cpu_pc, dest);
>  tcg_gen_goto_tb(n);
> -tcg_gen_exit_tb((uintptr_t)dc->tb + n);
> +tcg_gen_exit_tb((uintptr_t)dc->base.tb + n);
>  } else {
>  tcg_gen_movi_tl(cpu_pc, dest);
> -if (dc->singlestep_enabled) {
> +if (dc->base.singlestep_enabled) {
>  gen_exception(dc, EXCP_DEBUG);
>  }
>  tcg_gen_exit_tb(0);
> @@ -194,16 +192,16 @@ static void gen_goto_tb(DisasContext *dc, int n, 
> target_ulong dest)
>  
>  static void gen_jump(DisasContext *dc, int32_t n26, uint32_t reg, uint32_t 
> op0)
>  {
> -target_ulong tmp_pc = dc->pc + n26 * 4;
> +target_ulong tmp_pc = dc->base.pc_next + n26 * 4;
>  
>  switch (op0) {
>  case 0x00: /* l.j */
>  tcg_gen_movi_tl(jmp_pc, tmp_pc);
>  break;
>  case 0x01: /* l.jal */
> -tcg_gen_movi_tl(cpu_R[9], dc->pc + 8);
> +tcg_gen_movi_tl(cpu_R[9], dc->base.pc_next + 8);
>  /* Optimize jal being used to load the PC for PIC.  */
> -if (tmp_pc == dc->pc + 8) {
> +if (tmp_pc == dc->base.pc_next + 8) {
>  return;
>  }
>  tcg_gen_movi_tl(jmp_pc, tmp_pc);
> @@ -211,7 +209,7 @@ static void gen_jump(DisasContext *dc, int32_t n26, 
> uint32_t reg, uint32_t op0)
>  case 0x03: /* l.bnf */
>  case 0x04: /* l.bf  */
>  {
> -TCGv t_next = tcg_const_tl(dc->pc + 8);
> +TCGv t_next = tcg_const_tl(dc->base.pc_next + 8);
>  TCGv t_true = tcg_const_tl(tmp_pc);
>  TCGv t_zero = tcg_const_tl(0);
>  
> @@ -227,7 +225,7 @@ static void gen_jump(DisasContext *dc, int32_t n26, 
> uint32_t reg, uint32_t op0)
>  tcg_gen_mov_tl(jmp_pc, cpu_R[reg]);
>  break;
>  case 0x12: /* l.jalr */
> -tcg_gen_movi_tl(cpu_R[9], (dc->pc + 8));
> +tcg_gen_movi_tl(cpu_R[9], (dc->base.pc_next + 8));
>  tcg_gen_mov_tl(jmp_pc, cpu_R[reg]);
>  break;
>  default:
> @@ -795,7 +793,7 @@ static void dec_misc(DisasContext *dc, uint32_t insn)
>  return;
>  }
>  gen_helper_rfe(cpu_env);
> -dc->is_jmp = DISAS_UPDATE;
> +dc->base.is_jmp = DISAS_UPDATE;
>  #endif
>  }
>  break;
> @@ -1254,14 +1252,14 @@ static void dec_sys(DisasContext *dc, uint32_t insn)
>  switch (op0) {
>  case 0x000:/* l.sys */
>  LOG_DIS("l.sys %d\n", 

[Qemu-devel] [PATCH 2/2] target/openrisc: convert to TranslatorOps

2018-02-17 Thread Emilio G. Cota
Notes:

- Changed the num_insns test in tb_start to check for
  dc->base.num_insns > 1, since when tb_start is first
  called in a TB, base.num_insns is already set to 1.

- Removed DISAS_NEXT from the switch on tb_stop; use DISAS_TOO_MANY
  instead.

- Added an assert_not_reached on tb_stop for DISAS_NEXT and the default
  case.

- Merged the two separate log_target_disas calls into the disas_log op.

Signed-off-by: Emilio G. Cota 
---
 target/openrisc/translate.c | 168 ++--
 1 file changed, 85 insertions(+), 83 deletions(-)

diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 0450144..4af4569 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -49,6 +49,7 @@ typedef struct DisasContext {
 uint32_t mem_idx;
 uint32_t tb_flags;
 uint32_t delayed_branch;
+uint32_t next_page_start;
 } DisasContext;
 
 static TCGv cpu_sr;
@@ -1519,46 +1520,23 @@ static void disas_openrisc_insn(DisasContext *dc, 
OpenRISCCPU *cpu)
 }
 }
 
-void gen_intermediate_code(CPUState *cs, struct TranslationBlock *tb)
+static int openrisc_tr_init_disas_context(DisasContextBase *dcbase,
+  CPUState *cs, int max_insns)
 {
+DisasContext *dc = container_of(dcbase, DisasContext, base);
 CPUOpenRISCState *env = cs->env_ptr;
-OpenRISCCPU *cpu = openrisc_env_get_cpu(env);
-struct DisasContext ctx, *dc = &ctx;
-uint32_t pc_start;
-uint32_t next_page_start;
-int num_insns;
-int max_insns;
 
-pc_start = tb->pc;
-
-dc->base.tb = tb;
-dc->base.singlestep_enabled = cs->singlestep_enabled;
-dc->base.pc_next = pc_start;
-dc->base.is_jmp = DISAS_NEXT;
-
-dc->mem_idx = cpu_mmu_index(&cpu->env, false);
+dc->mem_idx = cpu_mmu_index(env, false);
 dc->tb_flags = dc->base.tb->flags;
 dc->delayed_branch = (dc->tb_flags & TB_FLAGS_DFLAG) != 0;
+dc->next_page_start = (dc->base.pc_first & TARGET_PAGE_MASK) +
+TARGET_PAGE_SIZE;
+return max_insns;
+}
 
-next_page_start = (pc_start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
-num_insns = 0;
-max_insns = tb_cflags(tb) & CF_COUNT_MASK;
-
-if (max_insns == 0) {
-max_insns = CF_COUNT_MASK;
-}
-if (max_insns > TCG_MAX_INSNS) {
-max_insns = TCG_MAX_INSNS;
-}
-
-if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
-&& qemu_log_in_addr_range(pc_start)) {
-qemu_log_lock();
-qemu_log("\n");
-qemu_log("IN: %s\n", lookup_symbol(pc_start));
-}
-
-gen_tb_start(tb);
+static void openrisc_tr_tb_start(DisasContextBase *db, CPUState *cs)
+{
+DisasContext *dc = container_of(db, DisasContext, base);
 
 /* Allow the TCG optimizer to see that R0 == 0,
when it's true, which is the common case.  */
@@ -1567,50 +1545,60 @@ void gen_intermediate_code(CPUState *cs, struct 
TranslationBlock *tb)
 } else {
 cpu_R[0] = cpu_R0;
 }
+}
+
+static void openrisc_tr_insn_start(DisasContextBase *dcbase, CPUState *cs)
+{
+DisasContext *dc = container_of(dcbase, DisasContext, base);
 
-do {
-tcg_gen_insn_start(dc->base.pc_next, (dc->delayed_branch ? 1 : 0)
-  | (num_insns ? 2 : 0));
-num_insns++;
+tcg_gen_insn_start(dc->base.pc_next, (dc->delayed_branch ? 1 : 0)
+   | (dc->base.num_insns > 1 ? 2 : 0));
+}
 
-if (unlikely(cpu_breakpoint_test(cs, dc->base.pc_next, BP_ANY))) {
-tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
-gen_exception(dc, EXCP_DEBUG);
+static bool openrisc_tr_breakpoint_check(DisasContextBase *dcbase, CPUState 
*cs,
+ const CPUBreakpoint *bp)
+{
+DisasContext *dc = container_of(dcbase, DisasContext, base);
+
+tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
+gen_exception(dc, EXCP_DEBUG);
+dc->base.is_jmp = DISAS_UPDATE;
+/* The address covered by the breakpoint must be included in
+   [tb->pc, tb->pc + tb->size) in order to for it to be
+   properly cleared -- thus we increment the PC here so that
+   the logic setting tb->size below does the right thing.  */
+dc->base.pc_next += 4;
+return true;
+}
+
+static void openrisc_tr_translate_insn(DisasContextBase *dcbase, CPUState *cs)
+{
+DisasContext *dc = container_of(dcbase, DisasContext, base);
+OpenRISCCPU *cpu = OPENRISC_CPU(cs);
+
+disas_openrisc_insn(dc, cpu);
+dc->base.pc_next += 4;
+
+/* delay slot */
+if (dc->delayed_branch) {
+dc->delayed_branch--;
+if (!dc->delayed_branch) {
+tcg_gen_mov_tl(cpu_pc, jmp_pc);
+tcg_gen_discard_tl(jmp_pc);
 dc->base.is_jmp = DISAS_UPDATE;
-/* The address covered by the breakpoint must be included in
-   [tb->pc, tb->pc + tb->size) in order to for it to be
-   properly cleared -- thus we increment the PC here so 

[Qemu-devel] [PATCH 0/2] target/openrisc: translator loop conversion

2018-02-17 Thread Emilio G. Cota
Tested on the image linked from the wiki:
  https://wiki.qemu.org/Testing/System_Images
Boot after decompressing with:
  or1k-softmmu/qemu-system-or1k -cpu or1200 -M or1k-sim \
-kernel path/to/or1k-linux-4.10 \
-serial stdio -nographic -monitor none

Thanks,

Emilio




[Qemu-devel] [PATCH 1/2] target/openrisc: convert to DisasContextBase

2018-02-17 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 target/openrisc/translate.c | 87 ++---
 1 file changed, 43 insertions(+), 44 deletions(-)

diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 2747b24..0450144 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -36,7 +36,8 @@
 #include "exec/log.h"
 
 #define LOG_DIS(str, ...) \
-qemu_log_mask(CPU_LOG_TB_IN_ASM, "%08x: " str, dc->pc, ## __VA_ARGS__)
+qemu_log_mask(CPU_LOG_TB_IN_ASM, "%08x: " str, dc->base.pc_next,\
+  ## __VA_ARGS__)
 
 /* is_jmp field values */
 #define DISAS_JUMP    DISAS_TARGET_0 /* only pc was modified dynamically */
@@ -44,13 +45,10 @@
 #define DISAS_TB_JUMP DISAS_TARGET_2 /* only pc was modified statically */
 
 typedef struct DisasContext {
-TranslationBlock *tb;
-target_ulong pc;
-uint32_t is_jmp;
+DisasContextBase base;
 uint32_t mem_idx;
 uint32_t tb_flags;
 uint32_t delayed_branch;
-bool singlestep_enabled;
 } DisasContext;
 
 static TCGv cpu_sr;
@@ -126,9 +124,9 @@ static void gen_exception(DisasContext *dc, unsigned int 
excp)
 
 static void gen_illegal_exception(DisasContext *dc)
 {
-tcg_gen_movi_tl(cpu_pc, dc->pc);
+tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
 gen_exception(dc, EXCP_ILLEGAL);
-dc->is_jmp = DISAS_UPDATE;
+dc->base.is_jmp = DISAS_UPDATE;
 }
 
 /* not used yet, open it when we need or64.  */
@@ -166,12 +164,12 @@ static void check_ov64s(DisasContext *dc)
 
 static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
 {
-if (unlikely(dc->singlestep_enabled)) {
+if (unlikely(dc->base.singlestep_enabled)) {
 return false;
 }
 
 #ifndef CONFIG_USER_ONLY
-return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+return (dc->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
 #else
 return true;
 #endif
@@ -182,10 +180,10 @@ static void gen_goto_tb(DisasContext *dc, int n, 
target_ulong dest)
 if (use_goto_tb(dc, dest)) {
 tcg_gen_movi_tl(cpu_pc, dest);
 tcg_gen_goto_tb(n);
-tcg_gen_exit_tb((uintptr_t)dc->tb + n);
+tcg_gen_exit_tb((uintptr_t)dc->base.tb + n);
 } else {
 tcg_gen_movi_tl(cpu_pc, dest);
-if (dc->singlestep_enabled) {
+if (dc->base.singlestep_enabled) {
 gen_exception(dc, EXCP_DEBUG);
 }
 tcg_gen_exit_tb(0);
@@ -194,16 +192,16 @@ static void gen_goto_tb(DisasContext *dc, int n, 
target_ulong dest)
 
 static void gen_jump(DisasContext *dc, int32_t n26, uint32_t reg, uint32_t op0)
 {
-target_ulong tmp_pc = dc->pc + n26 * 4;
+target_ulong tmp_pc = dc->base.pc_next + n26 * 4;
 
 switch (op0) {
 case 0x00: /* l.j */
 tcg_gen_movi_tl(jmp_pc, tmp_pc);
 break;
 case 0x01: /* l.jal */
-tcg_gen_movi_tl(cpu_R[9], dc->pc + 8);
+tcg_gen_movi_tl(cpu_R[9], dc->base.pc_next + 8);
 /* Optimize jal being used to load the PC for PIC.  */
-if (tmp_pc == dc->pc + 8) {
+if (tmp_pc == dc->base.pc_next + 8) {
 return;
 }
 tcg_gen_movi_tl(jmp_pc, tmp_pc);
@@ -211,7 +209,7 @@ static void gen_jump(DisasContext *dc, int32_t n26, 
uint32_t reg, uint32_t op0)
 case 0x03: /* l.bnf */
 case 0x04: /* l.bf  */
 {
-TCGv t_next = tcg_const_tl(dc->pc + 8);
+TCGv t_next = tcg_const_tl(dc->base.pc_next + 8);
 TCGv t_true = tcg_const_tl(tmp_pc);
 TCGv t_zero = tcg_const_tl(0);
 
@@ -227,7 +225,7 @@ static void gen_jump(DisasContext *dc, int32_t n26, 
uint32_t reg, uint32_t op0)
 tcg_gen_mov_tl(jmp_pc, cpu_R[reg]);
 break;
 case 0x12: /* l.jalr */
-tcg_gen_movi_tl(cpu_R[9], (dc->pc + 8));
+tcg_gen_movi_tl(cpu_R[9], (dc->base.pc_next + 8));
 tcg_gen_mov_tl(jmp_pc, cpu_R[reg]);
 break;
 default:
@@ -795,7 +793,7 @@ static void dec_misc(DisasContext *dc, uint32_t insn)
 return;
 }
 gen_helper_rfe(cpu_env);
-dc->is_jmp = DISAS_UPDATE;
+dc->base.is_jmp = DISAS_UPDATE;
 #endif
 }
 break;
@@ -1254,14 +1252,14 @@ static void dec_sys(DisasContext *dc, uint32_t insn)
 switch (op0) {
 case 0x000:/* l.sys */
 LOG_DIS("l.sys %d\n", K16);
-tcg_gen_movi_tl(cpu_pc, dc->pc);
+tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
 gen_exception(dc, EXCP_SYSCALL);
-dc->is_jmp = DISAS_UPDATE;
+dc->base.is_jmp = DISAS_UPDATE;
 break;
 
 case 0x100:/* l.trap */
 LOG_DIS("l.trap %d\n", K16);
-tcg_gen_movi_tl(cpu_pc, dc->pc);
+tcg_gen_movi_tl(cpu_pc, dc->base.pc_next);
 gen_exception(dc, EXCP_TRAP);
 break;
 
@@ -1479,7 +1477,7 @@ static void disas_openrisc_insn(DisasContext *dc, 
OpenRISCCPU *cpu)
 {
 uint32_t op0;
 

[Qemu-devel] [PATCH] target/m68k: TCGv returned by gen_load() must be freed

2018-02-17 Thread Laurent Vivier
Signed-off-by: Laurent Vivier 
---
 target/m68k/translate.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 70c7583621..cb795ed25b 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2869,6 +2869,7 @@ DISAS_INSN(unlk)
 tcg_gen_mov_i32(reg, tmp);
 tcg_gen_addi_i32(QREG_SP, src, 4);
 tcg_temp_free(src);
+tcg_temp_free(tmp);
 }
 
 #if defined(CONFIG_SOFTMMU)
@@ -3146,6 +3147,9 @@ DISAS_INSN(subx_mem)
 gen_subx(s, src, dest, opsize);
 
 gen_store(s, opsize, addr_dest, QREG_CC_N, IS_USER(s));
+
+tcg_temp_free(dest);
+tcg_temp_free(src);
 }
 
 DISAS_INSN(mov3q)
@@ -3352,6 +3356,9 @@ DISAS_INSN(addx_mem)
 gen_addx(s, src, dest, opsize);
 
 gen_store(s, opsize, addr_dest, QREG_CC_N, IS_USER(s));
+
+tcg_temp_free(dest);
+tcg_temp_free(src);
 }
 
 static inline void shift_im(DisasContext *s, uint16_t insn, int opsize)
@@ -4396,6 +4403,8 @@ DISAS_INSN(chk2)
 gen_flush_flags(s);
 gen_helper_chk2(cpu_env, reg, bound1, bound2);
 tcg_temp_free(reg);
+tcg_temp_free(bound1);
+tcg_temp_free(bound2);
 }
 
 static void m68k_copy_line(TCGv dst, TCGv src, int index)
@@ -4545,6 +4554,7 @@ DISAS_INSN(moves)
 } else {
 gen_partset_reg(opsize, reg, tmp);
 }
+tcg_temp_free(tmp);
 }
 switch (extract32(insn, 3, 3)) {
 case 3: /* Indirect postincrement.  */
@@ -5535,6 +5545,7 @@ DISAS_INSN(mac)
 case 4: /* Pre-decrement.  */
 tcg_gen_mov_i32(AREG(insn, 0), addr);
 }
+tcg_temp_free(loadval);
 }
 }
 
-- 
2.14.3




Re: [Qemu-devel] [Qemu-ppc] [PATCH v2 3/3] ppc: Add aCube Sam460ex board

2018-02-17 Thread BALATON Zoltan

On Fri, 16 Feb 2018, BALATON Zoltan wrote:

On Fri, 16 Feb 2018, David Gibson wrote:

On Thu, Feb 15, 2018 at 10:27:06PM +0100, BALATON Zoltan wrote:

Add emulation of aCube Sam460ex board based on AMCC 460EX embedded SoC.
This is not a complete implementation yet, with a lot of components
still missing, but it is enough for the U-Boot firmware to start and to
boot a Linux kernel or AROS.

Signed-off-by: François Revol 
Signed-off-by: BALATON Zoltan 
---

v2:
- Rebased to latest changes on master
- Replaced printfs with error_report


This has a conflict in hw/ppc/Makefile.objs.  Looks like it was based
on some other patch that added ppc440_pcix.o.  That's not there
upstream.


That's patch 2/3 of this series. Have you missed that?


I've sent a v3 for this patch (3/3) now:

http://lists.nongnu.org/archive/html/qemu-devel/2018-02/msg04774.html

which includes the dts and dtb as well (I'll send a separate patch for the 
firmware after we agree on how to best do that). The missing 2/3 of the v2 
series is still valid and needed before this new patch:


http://lists.nongnu.org/archive/html/qemu-devel/2018-02/msg04259.html

The v3 is only replacing 3/3 of the previous series. Hope this is not too 
confusing.


Regards,
BALATON Zoltan


[Qemu-devel] [PATCH v3] ppc: Add aCube Sam460ex board

2018-02-17 Thread BALATON Zoltan
Add emulation of aCube Sam460ex board based on AMCC 460EX embedded SoC.
This is not a complete implementation yet, with a lot of components
still missing, but it is enough for the U-Boot firmware to start and to
boot a Linux kernel or AROS.

Signed-off-by: François Revol 
Signed-off-by: BALATON Zoltan 
---
v3:
- Added device tree source and blob
- Fixed clock frequency in device tree

v2:
- Rebased to latest changes on master
- Replaced printfs with error_report

 Makefile   |   2 +-
 default-configs/ppc-softmmu.mak|   2 +
 default-configs/ppcemb-softmmu.mak |   1 +
 hw/ppc/Makefile.objs   |   3 +-
 hw/ppc/sam460ex.c  | 603 +
 pc-bios/canyonlands.dtb| Bin 0 -> 9779 bytes
 pc-bios/canyonlands.dts| 566 ++
 7 files changed, 1175 insertions(+), 2 deletions(-)
 create mode 100644 hw/ppc/sam460ex.c
 create mode 100644 pc-bios/canyonlands.dtb
 create mode 100644 pc-bios/canyonlands.dts

diff --git a/Makefile b/Makefile
index 90e05ac..6434d6c 100644
--- a/Makefile
+++ b/Makefile
@@ -656,7 +656,7 @@ efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
 efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
 efi-e1000e.rom efi-vmxnet3.rom \
 qemu-icon.bmp qemu_logo_no_text.svg \
-bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
+bamboo.dtb canyonlands.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
 multiboot.bin linuxboot.bin linuxboot_dma.bin kvmvapic.bin \
 s390-ccw.img s390-netboot.img \
 spapr-rtas.bin slof.bin skiboot.lid \
diff --git a/default-configs/ppc-softmmu.mak b/default-configs/ppc-softmmu.mak
index 76e29cf..4d7be45 100644
--- a/default-configs/ppc-softmmu.mak
+++ b/default-configs/ppc-softmmu.mak
@@ -21,6 +21,8 @@ CONFIG_E500=y
 CONFIG_OPENPIC_KVM=$(call land,$(CONFIG_E500),$(CONFIG_KVM))
 CONFIG_PLATFORM_BUS=y
 CONFIG_ETSEC=y
+# For Sam460ex
+CONFIG_USB_EHCI_SYSBUS=y
 CONFIG_SM501=y
 CONFIG_IDE_SII3112=y
 CONFIG_I2C=y
diff --git a/default-configs/ppcemb-softmmu.mak 
b/default-configs/ppcemb-softmmu.mak
index bc5e1b3..67d18b2 100644
--- a/default-configs/ppcemb-softmmu.mak
+++ b/default-configs/ppcemb-softmmu.mak
@@ -15,6 +15,7 @@ CONFIG_PTIMER=y
 CONFIG_I8259=y
 CONFIG_XILINX=y
 CONFIG_XILINX_ETHLITE=y
+CONFIG_USB_EHCI_SYSBUS=y
 CONFIG_SM501=y
 CONFIG_IDE_SII3112=y
 CONFIG_I2C=y
diff --git a/hw/ppc/Makefile.objs b/hw/ppc/Makefile.objs
index bddc742..86d82a6 100644
--- a/hw/ppc/Makefile.objs
+++ b/hw/ppc/Makefile.objs
@@ -13,7 +13,8 @@ endif
 obj-$(CONFIG_PSERIES) += spapr_rtas_ddw.o
 # PowerPC 4xx boards
 obj-y += ppc4xx_devs.o ppc405_uc.o
-obj-$(CONFIG_PPC4XX) += ppc4xx_pci.o ppc405_boards.o ppc440_bamboo.o 
ppc440_pcix.o
+obj-$(CONFIG_PPC4XX) += ppc4xx_pci.o ppc405_boards.o
+obj-$(CONFIG_PPC4XX) += ppc440_bamboo.o ppc440_pcix.o ppc440_uc.o sam460ex.o
 # PReP
 obj-$(CONFIG_PREP) += prep.o
 obj-$(CONFIG_PREP) += prep_systemio.o
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
new file mode 100644
index 000..70b8e76
--- /dev/null
+++ b/hw/ppc/sam460ex.c
@@ -0,0 +1,603 @@
+/*
+ * QEMU aCube Sam460ex board emulation
+ *
+ * Copyright (c) 2012 François Revol
+ * Copyright (c) 2016-2018 BALATON Zoltan
+ *
+ * This file is derived from hw/ppc440_bamboo.c,
+ * the copyright for that material belongs to the original owners.
+ *
+ * This work is licensed under the GNU GPL license version 2 or later.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/cutils.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "hw/hw.h"
+#include "sysemu/blockdev.h"
+#include "hw/boards.h"
+#include "sysemu/kvm.h"
+#include "kvm_ppc.h"
+#include "sysemu/device_tree.h"
+#include "sysemu/block-backend.h"
+#include "hw/loader.h"
+#include "elf.h"
+#include "exec/address-spaces.h"
+#include "exec/memory.h"
+#include "hw/ppc/ppc440.h"
+#include "hw/ppc/ppc405.h"
+#include "hw/block/flash.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/qtest.h"
+#include "hw/sysbus.h"
+#include "hw/char/serial.h"
+#include "hw/i2c/ppc4xx_i2c.h"
+#include "hw/i2c/smbus.h"
+#include "hw/usb/hcd-ehci.h"
+
+#define BINARY_DEVICE_TREE_FILE "canyonlands.dtb"
+#define UBOOT_FILENAME "u-boot-sam460-20100605.bin"
+/* to extract the official U-Boot bin from the updater: */
+/* dd bs=1 skip=$(($(stat -c '%s' updater/updater-460) - 0x8)) \
+ if=updater/updater-460 of=u-boot-sam460-20100605.bin */
+
+/* from Sam460 U-Boot include/configs/Sam460ex.h */
+#define FLASH_BASE 0xfff0
+#define FLASH_BASE_H   0x4
+#define FLASH_SIZE (1 << 20)
+#define UBOOT_LOAD_BASE0xfff8
+#define UBOOT_SIZE 0x0008
+#define UBOOT_ENTRY0xfffc
+
+/* from U-Boot */
+#define EPAPR_MAGIC   (0x45504150)
+#define KERNEL_ADDR   0x100
+#define FDT_ADDR  0x180
+#define RAMDISK_ADDR  0x190
+
+/* Sam460ex IRQ MAP:
+   IRQ0  = ETH_INT
+   

Re: [Qemu-devel] [PATCH 0/1] slirp: Add domainname option to slirp's DHCP server

2018-02-17 Thread Samuel Thibault
Hello,

Benjamin Drung, on Fri, 16 Feb 2018 13:55:03 +0100, wrote:
> Or should the command line option be simpler, but how should it be specified
> then? Maybe
> 
>   -net 
> staticroute=10.0.2.0/24via10.0.2.2,staticroute=192.168.0.0/16via10.0.2.2

I guess 

>   -net staticroute=10.0.2.0/24:10.0.2.2,staticroute=192.168.0.0/16:10.0.2.2

would be more mainstream.

I'm also wondering to what extent we want to extend our DHCP server,
when a tap device can be used to plug in (or proxy) an actual DHCP server.

samuel



[Qemu-devel] [PATCH 12/19] target/hppa: Convert direct and indirect branches

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 125 ---
 target/hppa/insns.decode |  34 -
 2 files changed, 63 insertions(+), 96 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index e01a28c70c..5df5b8dba4 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -901,15 +901,6 @@ static target_sreg assemble_16a(uint32_t insn)
 return x << 2;
 }
 
-static target_sreg assemble_17(uint32_t insn)
-{
-target_ureg x = -(target_ureg)(insn & 1);
-x = (x <<  5) | extract32(insn, 16, 5);
-x = (x <<  1) | extract32(insn, 2, 1);
-x = (x << 10) | extract32(insn, 3, 10);
-return x << 2;
-}
-
 static target_sreg assemble_21(uint32_t insn)
 {
 target_ureg x = -(target_ureg)(insn & 1);
@@ -920,15 +911,6 @@ static target_sreg assemble_21(uint32_t insn)
 return x << 11;
 }
 
-static target_sreg assemble_22(uint32_t insn)
-{
-target_ureg x = -(target_ureg)(insn & 1);
-x = (x << 10) | extract32(insn, 16, 10);
-x = (x <<  1) | extract32(insn, 2, 1);
-x = (x << 10) | extract32(insn, 3, 10);
-return x << 2;
-}
-
 /* The parisc documentation describes only the general interpretation of
the conditions, without describing their exact implementation.  The
interpretations do not stand up well when considering ADD,C and SUB,B.
@@ -3549,11 +3531,8 @@ static void trans_depwi_sar(DisasContext *ctx, 
arg_depwi_sar *a, uint32_t insn)
 tcg_temp_free(i);
 }
 
-static void trans_be(DisasContext *ctx, uint32_t insn, bool is_l)
+static void trans_be(DisasContext *ctx, arg_be *a, uint32_t insn)
 {
-unsigned n = extract32(insn, 1, 1);
-unsigned b = extract32(insn, 21, 5);
-target_sreg disp = assemble_17(insn);
 TCGv_reg tmp;
 
 #ifdef CONFIG_USER_ONLY
@@ -3565,29 +3544,28 @@ static void trans_be(DisasContext *ctx, uint32_t insn, 
bool is_l)
 /* Since we don't implement spaces, just branch.  Do notice the special
case of "be disp(*,r0)" using a direct branch to disp, so that we can
goto_tb to the TB containing the syscall.  */
-if (b == 0) {
-return do_dbranch(ctx, disp, is_l ? 31 : 0, n);
+if (a->b == 0) {
+return do_dbranch(ctx, a->disp, a->l, a->n);
 }
 #else
-int sp = assemble_sr3(insn);
 nullify_over(ctx);
 #endif
 
 tmp = get_temp(ctx);
-tcg_gen_addi_reg(tmp, load_gpr(ctx, b), disp);
+tcg_gen_addi_reg(tmp, load_gpr(ctx, a->b), a->disp);
 tmp = do_ibranch_priv(ctx, tmp);
 
 #ifdef CONFIG_USER_ONLY
-do_ibranch(ctx, tmp, is_l ? 31 : 0, n);
+do_ibranch(ctx, tmp, a->l, a->n);
 #else
 TCGv_i64 new_spc = tcg_temp_new_i64();
 
-load_spr(ctx, new_spc, sp);
-if (is_l) {
+load_spr(ctx, new_spc, a->sp);
+if (a->l) {
 copy_iaoq_entry(cpu_gr[31], ctx->iaoq_n, ctx->iaoq_n_var);
 tcg_gen_mov_i64(cpu_sr[0], cpu_iasq_f);
 }
-if (n && use_nullify_skip(ctx)) {
+if (a->n && use_nullify_skip(ctx)) {
 tcg_gen_mov_reg(cpu_iaoq_f, tmp);
 tcg_gen_addi_reg(cpu_iaoq_b, cpu_iaoq_f, 4);
 tcg_gen_mov_i64(cpu_iasq_f, new_spc);
@@ -3599,7 +3577,7 @@ static void trans_be(DisasContext *ctx, uint32_t insn, 
bool is_l)
 }
 tcg_gen_mov_reg(cpu_iaoq_b, tmp);
 tcg_gen_mov_i64(cpu_iasq_b, new_spc);
-nullify_set(ctx, n);
+nullify_set(ctx, a->n);
 }
 tcg_temp_free_i64(new_spc);
 tcg_gen_lookup_and_goto_ptr();
@@ -3608,21 +3586,14 @@ static void trans_be(DisasContext *ctx, uint32_t insn, 
bool is_l)
 #endif
 }
 
-static void trans_bl(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_bl(DisasContext *ctx, arg_bl *a, uint32_t insn)
 {
-unsigned n = extract32(insn, 1, 1);
-unsigned link = extract32(insn, 21, 5);
-target_sreg disp = assemble_17(insn);
-
-do_dbranch(ctx, iaoq_dest(ctx, disp), link, n);
+do_dbranch(ctx, iaoq_dest(ctx, a->disp), a->l, a->n);
 }
 
-static void trans_b_gate(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_b_gate(DisasContext *ctx, arg_b_gate *a, uint32_t insn)
 {
-unsigned n = extract32(insn, 1, 1);
-unsigned link = extract32(insn, 21, 5);
-target_sreg disp = assemble_17(insn);
-target_ureg dest = iaoq_dest(ctx, disp);
+target_ureg dest = iaoq_dest(ctx, a->disp);
 
 /* Make sure the caller hasn't done something weird with the queue.
  * ??? This is not quite the same as the PSW[B] bit, which would be
@@ -3661,61 +3632,44 @@ static void trans_b_gate(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 }
 #endif
 
-do_dbranch(ctx, dest, link, n);
+do_dbranch(ctx, dest, a->l, a->n);
 }
 
-static void trans_bl_long(DisasContext *ctx, uint32_t insn, const DisasInsn 
*di)
+static void trans_blr(DisasContext *ctx, arg_blr *a, uint32_t insn)
 {
-unsigned n = extract32(insn, 1, 1);
-target_sreg disp = assemble_22(insn);
-
-

[Qemu-devel] [PATCH 11/19] target/hppa: Convert shift, extract, deposit insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 217 ++-
 target/hppa/insns.decode |  15 
 2 files changed, 96 insertions(+), 136 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 361a20b733..e01a28c70c 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -3293,26 +3293,21 @@ static void trans_movbi(DisasContext *ctx, arg_movbi 
*a, uint32_t insn)
 do_cbranch(ctx, a->disp, a->n, &cond);
 }
 
-static void trans_shrpw_sar(DisasContext *ctx, uint32_t insn,
-const DisasInsn *di)
+static void trans_shrpw_sar(DisasContext *ctx, arg_shrpw_sar *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned c = extract32(insn, 13, 3);
-unsigned r1 = extract32(insn, 16, 5);
-unsigned r2 = extract32(insn, 21, 5);
 TCGv_reg dest;
 
-if (c) {
+if (a->c) {
 nullify_over(ctx);
 }
 
-dest = dest_gpr(ctx, rt);
-if (r1 == 0) {
-tcg_gen_ext32u_reg(dest, load_gpr(ctx, r2));
+dest = dest_gpr(ctx, a->t);
+if (a->r1 == 0) {
+tcg_gen_ext32u_reg(dest, load_gpr(ctx, a->r2));
 tcg_gen_shr_reg(dest, dest, cpu_sar);
-} else if (r1 == r2) {
+} else if (a->r1 == a->r2) {
 TCGv_i32 t32 = tcg_temp_new_i32();
-tcg_gen_trunc_reg_i32(t32, load_gpr(ctx, r2));
+tcg_gen_trunc_reg_i32(t32, load_gpr(ctx, a->r2));
 tcg_gen_rotr_i32(t32, t32, cpu_sar);
 tcg_gen_extu_i32_reg(dest, t32);
 tcg_temp_free_i32(t32);
@@ -3320,7 +3315,7 @@ static void trans_shrpw_sar(DisasContext *ctx, uint32_t 
insn,
 TCGv_i64 t = tcg_temp_new_i64();
 TCGv_i64 s = tcg_temp_new_i64();
 
-tcg_gen_concat_reg_i64(t, load_gpr(ctx, r2), load_gpr(ctx, r1));
+tcg_gen_concat_reg_i64(t, load_gpr(ctx, a->r2), load_gpr(ctx, a->r1));
 tcg_gen_extu_reg_i64(s, cpu_sar);
 tcg_gen_shr_i64(t, t, s);
 tcg_gen_trunc_i64_reg(dest, t);
@@ -3328,79 +3323,67 @@ static void trans_shrpw_sar(DisasContext *ctx, uint32_t 
insn,
 tcg_temp_free_i64(t);
 tcg_temp_free_i64(s);
 }
-save_gpr(ctx, rt, dest);
+save_gpr(ctx, a->t, dest);
 
 /* Install the new nullification.  */
 cond_free(&ctx->null_cond);
-if (c) {
-ctx->null_cond = do_sed_cond(c, dest);
+if (a->c) {
+ctx->null_cond = do_sed_cond(a->c, dest);
 }
 nullify_end(ctx);
 }
 
-static void trans_shrpw_imm(DisasContext *ctx, uint32_t insn,
-const DisasInsn *di)
+static void trans_shrpw_imm(DisasContext *ctx, arg_shrpw_imm *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned cpos = extract32(insn, 5, 5);
-unsigned c = extract32(insn, 13, 3);
-unsigned r1 = extract32(insn, 16, 5);
-unsigned r2 = extract32(insn, 21, 5);
-unsigned sa = 31 - cpos;
+unsigned sa = 31 - a->cpos;
 TCGv_reg dest, t2;
 
-if (c) {
+if (a->c) {
 nullify_over(ctx);
 }
 
-dest = dest_gpr(ctx, rt);
-t2 = load_gpr(ctx, r2);
-if (r1 == r2) {
+dest = dest_gpr(ctx, a->t);
+t2 = load_gpr(ctx, a->r2);
+if (a->r1 == a->r2) {
 TCGv_i32 t32 = tcg_temp_new_i32();
 tcg_gen_trunc_reg_i32(t32, t2);
 tcg_gen_rotri_i32(t32, t32, sa);
 tcg_gen_extu_i32_reg(dest, t32);
 tcg_temp_free_i32(t32);
-} else if (r1 == 0) {
+} else if (a->r1 == 0) {
 tcg_gen_extract_reg(dest, t2, sa, 32 - sa);
 } else {
 TCGv_reg t0 = tcg_temp_new();
 tcg_gen_extract_reg(t0, t2, sa, 32 - sa);
-tcg_gen_deposit_reg(dest, t0, cpu_gr[r1], 32 - sa, sa);
+tcg_gen_deposit_reg(dest, t0, cpu_gr[a->r1], 32 - sa, sa);
 tcg_temp_free(t0);
 }
-save_gpr(ctx, rt, dest);
+save_gpr(ctx, a->t, dest);
 
 /* Install the new nullification.  */
 cond_free(&ctx->null_cond);
-if (c) {
-ctx->null_cond = do_sed_cond(c, dest);
+if (a->c) {
+ctx->null_cond = do_sed_cond(a->c, dest);
 }
 nullify_end(ctx);
 }
 
-static void trans_extrw_sar(DisasContext *ctx, uint32_t insn,
-const DisasInsn *di)
+static void trans_extrw_sar(DisasContext *ctx, arg_extrw_sar *a, uint32_t insn)
 {
-unsigned clen = extract32(insn, 0, 5);
-unsigned is_se = extract32(insn, 10, 1);
-unsigned c = extract32(insn, 13, 3);
-unsigned rt = extract32(insn, 16, 5);
-unsigned rr = extract32(insn, 21, 5);
-unsigned len = 32 - clen;
+unsigned len = 32 - a->clen;
 TCGv_reg dest, src, tmp;
 
-if (c) {
+if (a->c) {
 nullify_over(ctx);
 }
 
-dest = dest_gpr(ctx, rt);
-src = load_gpr(ctx, rr);
+dest = dest_gpr(ctx, a->t);
+src = load_gpr(ctx, a->r);
 tmp = tcg_temp_new();
 
 /* Recall that SAR is using big-endian bit numbering.  */
 tcg_gen_xori_reg(tmp, cpu_sar, TARGET_REGISTER_BITS - 1);
-if 

[Qemu-devel] [PATCH 17/19] target/hppa: Convert fp fused multiply-add insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 79 
 target/hppa/insns.decode | 12 
 2 files changed, 38 insertions(+), 53 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 5abe4cd610..1d2134ac06 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -882,14 +882,6 @@ static unsigned assemble_rb64(uint32_t insn)
 return r1 * 32 + r0;
 }
 
-static unsigned assemble_rc64(uint32_t insn)
-{
-unsigned r2 = extract32(insn, 8, 1);
-unsigned r1 = extract32(insn, 13, 3);
-unsigned r0 = extract32(insn, 9, 2);
-return r2 * 32 + r1 * 4 + r0;
-}
-
 static inline unsigned assemble_sr3(uint32_t insn)
 {
 unsigned s2 = extract32(insn, 13, 1);
@@ -4033,67 +4025,52 @@ static void trans_fmpysub_d(DisasContext *ctx, 
arg_mpyadd *a, uint32_t insn)
 do_fmpyadd_d(ctx, a, true);
 }
 
-static void trans_fmpyfadd_s(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
+static void trans_fmpyfadd_f(DisasContext *ctx, arg_fmpyfadd_f *a,
+ uint32_t insn)
 {
-unsigned rt = assemble_rt64(insn);
-unsigned neg = extract32(insn, 5, 1);
-unsigned rm1 = assemble_ra64(insn);
-unsigned rm2 = assemble_rb64(insn);
-unsigned ra3 = assemble_rc64(insn);
-TCGv_i32 a, b, c;
+TCGv_i32 x, y, z;
 
 nullify_over(ctx);
-a = load_frw0_i32(rm1);
-b = load_frw0_i32(rm2);
-c = load_frw0_i32(ra3);
+x = load_frw0_i32(a->rm1);
+y = load_frw0_i32(a->rm2);
+z = load_frw0_i32(a->ra3);
 
-if (neg) {
-gen_helper_fmpynfadd_s(a, cpu_env, a, b, c);
+if (a->neg) {
+gen_helper_fmpynfadd_s(x, cpu_env, x, y, z);
 } else {
-gen_helper_fmpyfadd_s(a, cpu_env, a, b, c);
+gen_helper_fmpyfadd_s(x, cpu_env, x, y, z);
 }
 
-tcg_temp_free_i32(b);
-tcg_temp_free_i32(c);
-save_frw_i32(rt, a);
-tcg_temp_free_i32(a);
+tcg_temp_free_i32(y);
+tcg_temp_free_i32(z);
+save_frw_i32(a->t, x);
+tcg_temp_free_i32(x);
 nullify_end(ctx);
 }
 
-static void trans_fmpyfadd_d(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
+static void trans_fmpyfadd_d(DisasContext *ctx, arg_fmpyfadd_d *a,
+ uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned neg = extract32(insn, 5, 1);
-unsigned rm1 = extract32(insn, 21, 5);
-unsigned rm2 = extract32(insn, 16, 5);
-unsigned ra3 = assemble_rc64(insn);
-TCGv_i64 a, b, c;
+TCGv_i64 x, y, z;
 
 nullify_over(ctx);
-a = load_frd0(rm1);
-b = load_frd0(rm2);
-c = load_frd0(ra3);
+x = load_frd0(a->rm1);
+y = load_frd0(a->rm2);
+z = load_frd0(a->ra3);
 
-if (neg) {
-gen_helper_fmpynfadd_d(a, cpu_env, a, b, c);
+if (a->neg) {
+gen_helper_fmpynfadd_d(x, cpu_env, x, y, z);
 } else {
-gen_helper_fmpyfadd_d(a, cpu_env, a, b, c);
+gen_helper_fmpyfadd_d(x, cpu_env, x, y, z);
 }
 
-tcg_temp_free_i64(b);
-tcg_temp_free_i64(c);
-save_frd(rt, a);
-tcg_temp_free_i64(a);
+tcg_temp_free_i64(y);
+tcg_temp_free_i64(z);
+save_frd(a->t, x);
+tcg_temp_free_i64(x);
 nullify_end(ctx);
 }
 
-static const DisasInsn table_fp_fused[] = {
-{ 0xb800u, 0xfc000800u, trans_fmpyfadd_s },
-{ 0xb8000800u, 0xfc0019c0u, trans_fmpyfadd_d }
-};
-
 static void translate_table_int(DisasContext *ctx, uint32_t insn,
 const DisasInsn table[], size_t n)
 {
@@ -4129,10 +4106,6 @@ static void translate_one(DisasContext *ctx, uint32_t 
insn)
 case 0x0E:
 translate_table(ctx, insn, table_float_0e);
 return;
-
-case 0x2E:
-translate_table(ctx, insn, table_fp_fused);
-return;
 }
 gen_illegal(ctx);
 }
diff --git a/target/hppa/insns.decode b/target/hppa/insns.decode
index ddbbaefd83..83612c562e 100644
--- a/target/hppa/insns.decode
+++ b/target/hppa/insns.decode
@@ -39,6 +39,10 @@
 
 %rm64  1:1 16:5
 %rt64  6:1 0:5
+%ra64  7:1 21:5
+%rb64  12:1 16:5
+%rc64  8:1 13:3 9:2
+%rc32  13:3 9:2
 
 %im5_0 0:s1 1:4
 %im5_1616:s1 17:4
@@ -336,3 +340,11 @@ blr111010 l:5   x:5   010 000 n:1 0
 bv 111010 b:5   x:5   110 000 n:1 0
 bve111010 b:5   0 110 100 n:1 -l=0
 bve111010 b:5   0 111 100 n:1 -l=2
+
+
+# FP Fused Multiple-Add
+
+
+fmpyfadd_f 101110 . . ... . 0 ... . . neg:1 . \
+   rm1=%ra64 rm2=%rb64 ra3=%rc64 t=%rt64
+fmpyfadd_d 101110 rm1:5 rm2:5 ... 0 1 ..0 0 0 neg:1 t:5ra3=%rc32
-- 
2.14.3




[Qemu-devel] [PATCH 18/19] target/hppa: Convert fp operate insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 757 ---
 target/hppa/insns.decode | 175 +++
 2 files changed, 498 insertions(+), 434 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 1d2134ac06..305a81778b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -360,21 +360,6 @@ static int expand_shl11(int val)
to recognize unmasked interrupts.  */
 #define DISAS_IAQ_N_STALE_EXIT  DISAS_TARGET_2
 
-typedef struct DisasInsn {
-uint32_t insn, mask;
-void (*trans)(DisasContext *ctx, uint32_t insn,
-  const struct DisasInsn *f);
-union {
-void (*ttt)(TCGv_reg, TCGv_reg, TCGv_reg);
-void (*weww)(TCGv_i32, TCGv_env, TCGv_i32, TCGv_i32);
-void (*dedd)(TCGv_i64, TCGv_env, TCGv_i64, TCGv_i64);
-void (*wew)(TCGv_i32, TCGv_env, TCGv_i32);
-void (*ded)(TCGv_i64, TCGv_env, TCGv_i64);
-void (*wed)(TCGv_i32, TCGv_env, TCGv_i64);
-void (*dew)(TCGv_i64, TCGv_env, TCGv_i32);
-} f;
-} DisasInsn;
-
 /* global register indexes */
 static TCGv_reg cpu_gr[32];
 static TCGv_i64 cpu_sr[4];
@@ -861,34 +846,6 @@ static void gen_goto_tb(DisasContext *ctx, int which,
 }
 }
 
-static unsigned assemble_rt64(uint32_t insn)
-{
-unsigned r1 = extract32(insn, 6, 1);
-unsigned r0 = extract32(insn, 0, 5);
-return r1 * 32 + r0;
-}
-
-static unsigned assemble_ra64(uint32_t insn)
-{
-unsigned r1 = extract32(insn, 7, 1);
-unsigned r0 = extract32(insn, 21, 5);
-return r1 * 32 + r0;
-}
-
-static unsigned assemble_rb64(uint32_t insn)
-{
-unsigned r1 = extract32(insn, 12, 1);
-unsigned r0 = extract32(insn, 16, 5);
-return r1 * 32 + r0;
-}
-
-static inline unsigned assemble_sr3(uint32_t insn)
-{
-unsigned s2 = extract32(insn, 13, 1);
-unsigned s0 = extract32(insn, 14, 2);
-return s2 * 4 + s0;
-}
-
 /* The parisc documentation describes only the general interpretation of
the conditions, without describing their exact implementation.  The
interpretations do not stand up well when considering ADD,C and SUB,B.
@@ -3522,140 +3479,262 @@ static void trans_bve(DisasContext *ctx, arg_bve *a, 
uint32_t insn)
 #endif
 }
 
-static void trans_fop_wew_0c(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_wew(ctx, rt, ra, di->f.wew);
-}
+/*
+ * Float class 0
+ */
 
-static void trans_fop_wew_0e(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = assemble_rt64(insn);
-unsigned ra = assemble_ra64(insn);
-do_fop_wew(ctx, rt, ra, di->f.wew);
-}
-
-static void trans_fop_ded(DisasContext *ctx, uint32_t insn,
-  const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_ded(ctx, rt, ra, di->f.ded);
-}
-
-static void trans_fop_wed_0c(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_wed(ctx, rt, ra, di->f.wed);
-}
-
-static void trans_fop_wed_0e(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = assemble_rt64(insn);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_wed(ctx, rt, ra, di->f.wed);
-}
-
-static void trans_fop_dew_0c(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_dew(ctx, rt, ra, di->f.dew);
-}
-
-static void trans_fop_dew_0e(DisasContext *ctx, uint32_t insn,
- const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned ra = assemble_ra64(insn);
-do_fop_dew(ctx, rt, ra, di->f.dew);
-}
-
-static void trans_fop_weww_0c(DisasContext *ctx, uint32_t insn,
-  const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned rb = extract32(insn, 16, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_weww(ctx, rt, ra, rb, di->f.weww);
-}
-
-static void trans_fop_weww_0e(DisasContext *ctx, uint32_t insn,
-  const DisasInsn *di)
-{
-unsigned rt = assemble_rt64(insn);
-unsigned rb = assemble_rb64(insn);
-unsigned ra = assemble_ra64(insn);
-do_fop_weww(ctx, rt, ra, rb, di->f.weww);
-}
-
-static void trans_fop_dedd(DisasContext *ctx, uint32_t insn,
-   const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned rb = extract32(insn, 16, 5);
-unsigned ra = extract32(insn, 21, 5);
-do_fop_dedd(ctx, rt, ra, rb, di->f.dedd);
-}
-
-static void gen_fcpy_s(TCGv_i32 dst, TCGv_env 

[Qemu-devel] [PATCH 19/19] target/hppa: Merge translate_one into hppa_tr_translate_insn

2018-02-17 Thread Richard Henderson
Now that the implementation is entirely within the generated
decode function, eliminate the wrapper.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 11 +++
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 305a81778b..877e4dc2b7 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -3992,13 +3992,6 @@ static void trans_fmpyfadd_d(DisasContext *ctx, 
arg_fmpyfadd_d *a,
 nullify_end(ctx);
 }
 
-static void translate_one(DisasContext *ctx, uint32_t insn)
-{
-if (!decode(ctx, insn)) {
-gen_illegal(ctx);
-}
-}
-
 static int hppa_tr_init_disas_context(DisasContextBase *dcbase,
   CPUState *cs, int max_insns)
 {
@@ -4107,7 +4100,9 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 ret = DISAS_NEXT;
 } else {
 ctx->insn = insn;
-translate_one(ctx, insn);
+if (!decode(ctx, insn)) {
+gen_illegal(ctx);
+}
 ret = ctx->base.is_jmp;
 assert(ctx->null_lab == NULL);
 }
-- 
2.14.3




[Qemu-devel] [PATCH 10/19] target/hppa: Convert conditional branches

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 188 ---
 target/hppa/insns.decode |  30 
 2 files changed, 110 insertions(+), 108 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 1cfdbf6296..361a20b733 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -315,6 +315,13 @@ static int ma_to_m(int val)
 return val & 2 ? (val & 1 ? -1 : 1) : 0;
 }
 
+/* Used for branch targets.  */
+static int expand_shl2(int val)
+{
+return val << 2;
+}
+
+
 /* Include the auto-generated decoder.  */
 #include "decode.inc.c"
 
@@ -876,14 +883,6 @@ static inline unsigned assemble_sr3(uint32_t insn)
 return s2 * 4 + s0;
 }
 
-static target_sreg assemble_12(uint32_t insn)
-{
-target_ureg x = -(target_ureg)(insn & 1);
-x = (x <<  1) | extract32(insn, 2, 1);
-x = (x << 10) | extract32(insn, 3, 10);
-return x;
-}
-
 static target_sreg assemble_16(uint32_t insn)
 {
 /* Take the name from PA2.0, which produces a 16-bit number
@@ -3156,24 +3155,12 @@ static void trans_copr_dw(DisasContext *ctx, uint32_t 
insn)
 }
 }
 
-static void trans_cmpb(DisasContext *ctx, uint32_t insn,
-   bool is_true, bool is_imm, bool is_dw)
+static void do_cmpb(DisasContext *ctx, unsigned r, TCGv_reg in1,
+unsigned c, unsigned f, unsigned n, int disp)
 {
-target_sreg disp = assemble_12(insn) * 4;
-unsigned n = extract32(insn, 1, 1);
-unsigned c = extract32(insn, 13, 3);
-unsigned r = extract32(insn, 21, 5);
-unsigned cf = c * 2 + !is_true;
-TCGv_reg dest, in1, in2, sv;
+TCGv_reg dest, in2, sv;
 DisasCond cond;
 
-nullify_over(ctx);
-
-if (is_imm) {
-in1 = load_const(ctx, low_sextract(insn, 16, 5));
-} else {
-in1 = load_gpr(ctx, extract32(insn, 16, 5));
-}
 in2 = load_gpr(ctx, r);
 dest = get_temp(ctx);
 
@@ -3184,28 +3171,28 @@ static void trans_cmpb(DisasContext *ctx, uint32_t insn,
 sv = do_sub_sv(ctx, dest, in1, in2);
 }
 
-cond = do_sub_cond(cf, dest, in1, in2, sv);
+cond = do_sub_cond(c * 2 + f, dest, in1, in2, sv);
 do_cbranch(ctx, disp, n, &cond);
 }
 
-static void trans_addb(DisasContext *ctx, uint32_t insn,
-   bool is_true, bool is_imm)
+static void trans_cmpb(DisasContext *ctx, arg_cmpb *a, uint32_t insn)
 {
-target_sreg disp = assemble_12(insn) * 4;
-unsigned n = extract32(insn, 1, 1);
-unsigned c = extract32(insn, 13, 3);
-unsigned r = extract32(insn, 21, 5);
-unsigned cf = c * 2 + !is_true;
-TCGv_reg dest, in1, in2, sv, cb_msb;
+nullify_over(ctx);
+do_cmpb(ctx, a->r2, load_gpr(ctx, a->r1), a->c, a->f, a->n, a->disp);
+}
+
+static void trans_cmpbi(DisasContext *ctx, arg_cmpbi *a, uint32_t insn)
+{
+nullify_over(ctx);
+do_cmpb(ctx, a->r, load_const(ctx, a->i), a->c, a->f, a->n, a->disp);
+}
+
+static void do_addb(DisasContext *ctx, unsigned r, TCGv_reg in1,
+unsigned c, unsigned f, unsigned n, int disp)
+{
+TCGv_reg dest, in2, sv, cb_msb;
 DisasCond cond;
 
-nullify_over(ctx);
-
-if (is_imm) {
-in1 = load_const(ctx, low_sextract(insn, 16, 5));
-} else {
-in1 = load_gpr(ctx, extract32(insn, 16, 5));
-}
 in2 = load_gpr(ctx, r);
 dest = dest_gpr(ctx, r);
 sv = NULL;
@@ -3226,59 +3213,84 @@ static void trans_addb(DisasContext *ctx, uint32_t insn,
 break;
 }
 
-cond = do_cond(cf, dest, cb_msb, sv);
+cond = do_cond(c * 2 + f, dest, cb_msb, sv);
 do_cbranch(ctx, disp, n, &cond);
 }
 
-static void trans_bb(DisasContext *ctx, uint32_t insn)
+static void trans_addb(DisasContext *ctx, arg_addb *a, uint32_t insn)
+{
+nullify_over(ctx);
+do_addb(ctx, a->r2, load_gpr(ctx, a->r1), a->c, a->f, a->n, a->disp);
+}
+
+static void trans_addbi(DisasContext *ctx, arg_addbi *a, uint32_t insn)
+{
+nullify_over(ctx);
+do_addb(ctx, a->r, load_const(ctx, a->i), a->c, a->f, a->n, a->disp);
+}
+
+static void trans_bb_sar(DisasContext *ctx, arg_bb_sar *a, uint32_t insn)
 {
-target_sreg disp = assemble_12(insn) * 4;
-unsigned n = extract32(insn, 1, 1);
-unsigned c = extract32(insn, 15, 1);
-unsigned r = extract32(insn, 16, 5);
-unsigned p = extract32(insn, 21, 5);
-unsigned i = extract32(insn, 26, 1);
 TCGv_reg tmp, tcg_r;
 DisasCond cond;
 
 nullify_over(ctx);
 
 tmp = tcg_temp_new();
-tcg_r = load_gpr(ctx, r);
-if (i) {
-tcg_gen_shli_reg(tmp, tcg_r, p);
-} else {
-tcg_gen_shl_reg(tmp, tcg_r, cpu_sar);
-}
+tcg_r = load_gpr(ctx, a->r);
+tcg_gen_shl_reg(tmp, tcg_r, cpu_sar);
 
-cond = cond_make_0(c ? TCG_COND_GE : TCG_COND_LT, tmp);
+cond = cond_make_0(a->c ? TCG_COND_GE : TCG_COND_LT, tmp);
 tcg_temp_free(tmp);
-do_cbranch(ctx, disp, n, &cond);
+do_cbranch(ctx, a->disp, a->n, &cond);
 }
 
-static void 

[Qemu-devel] [PATCH 09/19] target/hppa: Convert fp multiply-add

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 69 
 target/hppa/insns.decode | 12 +
 2 files changed, 52 insertions(+), 29 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 792e838849..1cfdbf6296 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -4234,37 +4234,54 @@ static inline int fmpyadd_s_reg(unsigned r)
 return (r & 16) * 2 + 16 + (r & 15);
 }
 
-static void trans_fmpyadd(DisasContext *ctx, uint32_t insn, bool is_sub)
+static void do_fmpyadd_s(DisasContext *ctx, arg_mpyadd *a, bool is_sub)
 {
-unsigned tm = extract32(insn, 0, 5);
-unsigned f = extract32(insn, 5, 1);
-unsigned ra = extract32(insn, 6, 5);
-unsigned ta = extract32(insn, 11, 5);
-unsigned rm2 = extract32(insn, 16, 5);
-unsigned rm1 = extract32(insn, 21, 5);
+int tm = fmpyadd_s_reg(a->tm);
+int ra = fmpyadd_s_reg(a->ra);
+int ta = fmpyadd_s_reg(a->ta);
+int rm2 = fmpyadd_s_reg(a->rm2);
+int rm1 = fmpyadd_s_reg(a->rm1);
 
 nullify_over(ctx);
 
-/* Independent multiply & add/sub, with undefined behaviour
-   if outputs overlap inputs.  */
-if (f == 0) {
-tm = fmpyadd_s_reg(tm);
-ra = fmpyadd_s_reg(ra);
-ta = fmpyadd_s_reg(ta);
-rm2 = fmpyadd_s_reg(rm2);
-rm1 = fmpyadd_s_reg(rm1);
-do_fop_weww(ctx, tm, rm1, rm2, gen_helper_fmpy_s);
-do_fop_weww(ctx, ta, ta, ra,
-is_sub ? gen_helper_fsub_s : gen_helper_fadd_s);
-} else {
-do_fop_dedd(ctx, tm, rm1, rm2, gen_helper_fmpy_d);
-do_fop_dedd(ctx, ta, ta, ra,
-is_sub ? gen_helper_fsub_d : gen_helper_fadd_d);
-}
+do_fop_weww(ctx, tm, rm1, rm2, gen_helper_fmpy_s);
+do_fop_weww(ctx, ta, ta, ra,
+is_sub ? gen_helper_fsub_s : gen_helper_fadd_s);
 
 nullify_end(ctx);
 }
 
+static void trans_fmpyadd_f(DisasContext *ctx, arg_mpyadd *a, uint32_t insn)
+{
+do_fmpyadd_s(ctx, a, false);
+}
+
+static void trans_fmpysub_f(DisasContext *ctx, arg_mpyadd *a, uint32_t insn)
+{
+do_fmpyadd_s(ctx, a, true);
+}
+
+static void do_fmpyadd_d(DisasContext *ctx, arg_mpyadd *a, bool is_sub)
+{
+nullify_over(ctx);
+
+do_fop_dedd(ctx, a->tm, a->rm1, a->rm2, gen_helper_fmpy_d);
+do_fop_dedd(ctx, a->ta, a->ta, a->ra,
+is_sub ? gen_helper_fsub_d : gen_helper_fadd_d);
+
+nullify_end(ctx);
+}
+
+static void trans_fmpyadd_d(DisasContext *ctx, arg_mpyadd *a, uint32_t insn)
+{
+do_fmpyadd_d(ctx, a, false);
+}
+
+static void trans_fmpysub_d(DisasContext *ctx, arg_mpyadd *a, uint32_t insn)
+{
+do_fmpyadd_d(ctx, a, true);
+}
+
 static void trans_fmpyfadd_s(DisasContext *ctx, uint32_t insn,
  const DisasInsn *di)
 {
@@ -4355,9 +4372,6 @@ static void translate_one(DisasContext *ctx, uint32_t 
insn)
 
 opc = extract32(insn, 26, 6);
 switch (opc) {
-case 0x06:
-trans_fmpyadd(ctx, insn, false);
-return;
 case 0x08:
 trans_ldil(ctx, insn);
 return;
@@ -4435,9 +4449,6 @@ static void translate_one(DisasContext *ctx, uint32_t 
insn)
 case 0x25:
 trans_subi(ctx, insn);
 return;
-case 0x26:
-trans_fmpyadd(ctx, insn, true);
-return;
 case 0x27:
 trans_cmpb(ctx, insn, true, false, true);
 return;
diff --git a/target/hppa/insns.decode b/target/hppa/insns.decode
index 212d12a9c9..5393d30f43 100644
--- a/target/hppa/insns.decode
+++ b/target/hppa/insns.decode
@@ -151,3 +151,15 @@ lda11 . . .. . 1 -- 0110  
..   @ldim5 size=2
 lda11 . . .. . 0 -- 0110  ..   @ldstx size=2
 sta11 . . .. . 1 -- 1110  ..   @stim5 size=2
 stby   11 b:5 r:5 sp:2 a:1 1 -- 1100 m:1   .   disp=%im5_0
+
+
+# Floating-point Multiply Add
+
+
+rm1 rm2 ta ra tm
+@mpyadd.. rm1:5 rm2:5 ta:5 ra:5 . tm:5
+
+fmpyadd_f  000110 . . . . 0 .  @mpyadd
+fmpyadd_d  000110 . . . . 1 .  @mpyadd
+fmpysub_f  100110 . . . . 0 .  @mpyadd
+fmpysub_d  100110 . . . . 1 .  @mpyadd
-- 
2.14.3




[Qemu-devel] [PATCH 16/19] target/hppa: Convert halt/reset insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 49 +++-
 target/hppa/insns.decode |  5 +
 2 files changed, 20 insertions(+), 34 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 1973923a18..5abe4cd610 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -2384,20 +2384,27 @@ static void trans_rfi_r(DisasContext *ctx, arg_rfi_r 
*a, uint32_t insn)
 do_rfi(ctx, true);
 }
 
-#ifndef CONFIG_USER_ONLY
-static void gen_hlt(DisasContext *ctx, int reset)
+static void trans_halt(DisasContext *ctx, arg_halt *a, uint32_t insn)
 {
 CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
 nullify_over(ctx);
-if (reset) {
-gen_helper_reset(cpu_env);
-} else {
-gen_helper_halt(cpu_env);
-}
+gen_helper_halt(cpu_env);
 ctx->base.is_jmp = DISAS_NORETURN;
 nullify_end(ctx);
+#endif
+}
+
+static void trans_reset(DisasContext *ctx, arg_reset *a, uint32_t insn)
+{
+CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
+nullify_over(ctx);
+gen_helper_reset(cpu_env);
+ctx->base.is_jmp = DISAS_NORETURN;
+nullify_end(ctx);
+#endif
 }
-#endif /* !CONFIG_USER_ONLY */
 
 static void trans_nop_addrx(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
@@ -4126,32 +4133,6 @@ static void translate_one(DisasContext *ctx, uint32_t 
insn)
 case 0x2E:
 translate_table(ctx, insn, table_fp_fused);
 return;
-
-case 0x04: /* spopn */
-case 0x05: /* diag */
-case 0x0F: /* product specific */
-break;
-
-case 0x07: /* unassigned */
-case 0x15: /* unassigned */
-case 0x1D: /* unassigned */
-case 0x37: /* unassigned */
-break;
-case 0x3F:
-#ifndef CONFIG_USER_ONLY
-/* Unassigned, but use as system-halt.  */
-if (insn == 0xfffdead0) {
-gen_hlt(ctx, 0); /* halt system */
-return;
-}
-if (insn == 0xfffdead1) {
-gen_hlt(ctx, 1); /* reset system */
-return;
-}
-#endif
-break;
-default:
-break;
 }
 gen_illegal(ctx);
 }
diff --git a/target/hppa/insns.decode b/target/hppa/insns.decode
index 1e4579e080..ddbbaefd83 100644
--- a/target/hppa/insns.decode
+++ b/target/hppa/insns.decode
@@ -101,6 +101,11 @@ ssm00 ..  000 01101011 t:5 
i=%sm_imm
 rfi00 - - --- 0110 0
 rfi_r  00 - - --- 01100101 0
 
+# These are artificial instructions used by QEMU firmware.
+# They are allocated from the unassigned instruction space.
+halt      1101 1110 1010 1101 
+reset     1101 1110 1010 1101 0001
+
 
 # Memory Management
 
-- 
2.14.3




[Qemu-devel] [PATCH 15/19] target/hppa: Convert fp indexed memory insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 93 
 target/hppa/insns.decode | 21 +++
 2 files changed, 21 insertions(+), 93 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 6f97f7330e..1973923a18 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -861,15 +861,6 @@ static void gen_goto_tb(DisasContext *ctx, int which,
 }
 }
 
-/* PA has a habit of taking the LSB of a field and using that as the sign,
-   with the rest of the field becoming the least significant bits.  */
-static target_sreg low_sextract(uint32_t val, int pos, int len)
-{
-target_ureg x = -(target_ureg)extract32(val, pos, 1);
-x = (x << (len - 1)) | extract32(val, pos + 1, len - 1);
-return x;
-}
-
 static unsigned assemble_rt64(uint32_t insn)
 {
 unsigned r1 = extract32(insn, 6, 1);
@@ -2982,84 +2973,6 @@ static void trans_ldo(DisasContext *ctx, arg_ldo *a, 
uint32_t insn)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_copr_w(DisasContext *ctx, uint32_t insn)
-{
-unsigned t0 = extract32(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned t1 = extract32(insn, 6, 1);
-unsigned ext3 = extract32(insn, 7, 3);
-/* unsigned cc = extract32(insn, 10, 2); */
-unsigned i = extract32(insn, 12, 1);
-unsigned ua = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rx = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-unsigned rt = t1 * 32 + t0;
-int modify = (m ? (ua ? -1 : 1) : 0);
-int disp, scale;
-
-if (i == 0) {
-scale = (ua ? 2 : 0);
-disp = 0;
-modify = m;
-} else {
-disp = low_sextract(rx, 0, 5);
-scale = 0;
-rx = 0;
-modify = (m ? (ua ? -1 : 1) : 0);
-}
-
-switch (ext3) {
-case 0: /* FLDW */
-do_floadw(ctx, rt, rb, rx, scale, disp, sp, modify);
-break;
-case 4: /* FSTW */
-do_fstorew(ctx, rt, rb, rx, scale, disp, sp, modify);
-break;
-default:
-gen_illegal(ctx);
-break;
-}
-}
-
-static void trans_copr_dw(DisasContext *ctx, uint32_t insn)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned ext4 = extract32(insn, 6, 4);
-/* unsigned cc = extract32(insn, 10, 2); */
-unsigned i = extract32(insn, 12, 1);
-unsigned ua = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rx = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-int modify = (m ? (ua ? -1 : 1) : 0);
-int disp, scale;
-
-if (i == 0) {
-scale = (ua ? 3 : 0);
-disp = 0;
-modify = m;
-} else {
-disp = low_sextract(rx, 0, 5);
-scale = 0;
-rx = 0;
-modify = (m ? (ua ? -1 : 1) : 0);
-}
-
-switch (ext4) {
-case 0: /* FLDD */
-do_floadd(ctx, rt, rb, rx, scale, disp, sp, modify);
-break;
-case 8: /* FSTD */
-do_fstored(ctx, rt, rb, rx, scale, disp, sp, modify);
-break;
-default:
-gen_illegal(ctx);
-break;
-}
-}
-
 static void do_cmpb(DisasContext *ctx, unsigned r, TCGv_reg in1,
 unsigned c, unsigned f, unsigned n, int disp)
 {
@@ -4203,12 +4116,6 @@ static void translate_one(DisasContext *ctx, uint32_t 
insn)
 
 opc = extract32(insn, 26, 6);
 switch (opc) {
-case 0x09:
-trans_copr_w(ctx, insn);
-return;
-case 0x0B:
-trans_copr_dw(ctx, insn);
-return;
 case 0x0C:
 translate_table(ctx, insn, table_float_0c);
 return;
diff --git a/target/hppa/insns.decode b/target/hppa/insns.decode
index 9a51e59de0..1e4579e080 100644
--- a/target/hppa/insns.decode
+++ b/target/hppa/insns.decode
@@ -38,6 +38,7 @@
 %sm_imm16:10 !function=expand_sm_imm
 
 %rm64  1:1 16:5
+%rt64  6:1 0:5
 
 %im5_0 0:s1 1:4
 %im5_1616:s1 17:4
@@ -193,6 +194,26 @@ lda11 . . .. . 0 -- 0110  
..   @ldstx size=2
 sta11 . . .. . 1 -- 1110  ..   @stim5 size=2
 stby   11 b:5 r:5 sp:2 a:1 1 -- 1100 m:1   .   disp=%im5_0
 
+@fldstwx   .. b:5 x:5   sp:2 scale:1 ... m:1 . \
+t=%rt64 disp=0 size=2
+@fldstwi   .. b:5 . sp:2 .   ... .   . \
+t=%rt64 disp=%im5_16 m=%ma_to_m x=0 scale=0 size=2
+
+fldw   001001 . . .. . 0 -- 000 . . .  @fldstwx
+fldw   001001 . . .. . 1 -- 000 . . .  @fldstwi
+fstw   001001 . . .. . 0 -- 100 . . .  @fldstwx
+fstw   001001 . . .. . 1 -- 100 . . .  @fldstwi
+
+@fldstdx   .. b:5 x:5   sp:2 scale:1 ... m:1 t:5 \
+disp=0 size=3
+@fldstdi   .. b:5 

[Qemu-devel] [PATCH 14/19] target/hppa: Convert offset memory insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 193 ++-
 target/hppa/insns.decode |  49 
 2 files changed, 88 insertions(+), 154 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 51bd9016ab..6f97f7330e 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -315,12 +315,29 @@ static int ma_to_m(int val)
 return val & 2 ? (val & 1 ? -1 : 1) : 0;
 }
 
-/* Used for branch targets.  */
+/* Convert the sign of the displacement to a pre or post-modify.  */
+static int pos_to_m(int val)
+{
+return val ? 1 : -1;
+}
+
+static int neg_to_m(int val)
+{
+return val ? -1 : 1;
+}
+
+/* Used for branch targets and fp memory ops.  */
 static int expand_shl2(int val)
 {
 return val << 2;
 }
 
+/* Used for fp memory ops.  */
+static int expand_shl3(int val)
+{
+return val << 3;
+}
+
 /* Used for assemble_21.  */
 static int expand_shl11(int val)
 {
@@ -889,24 +906,6 @@ static inline unsigned assemble_sr3(uint32_t insn)
 return s2 * 4 + s0;
 }
 
-static target_sreg assemble_16(uint32_t insn)
-{
-/* Take the name from PA2.0, which produces a 16-bit number
-   only with wide mode; otherwise a 14-bit number.  Since we don't
-   implement wide mode, this is always the 14-bit number.  */
-return low_sextract(insn, 0, 14);
-}
-
-static target_sreg assemble_16a(uint32_t insn)
-{
-/* Take the name from PA2.0, which produces a 14-bit shifted number
-   only with wide mode; otherwise a 12-bit shifted number.  Since we
-   don't implement wide mode, this is always the 12-bit number.  */
-target_ureg x = -(target_ureg)(insn & 1);
-x = (x << 11) | extract32(insn, 2, 11);
-return x << 2;
-}
-
 /* The parisc documentation describes only the general interpretation of
the conditions, without describing their exact implementation.  The
interpretations do not stand up well when considering ADD,C and SUB,B.
@@ -1620,6 +1619,11 @@ static void do_floadw(DisasContext *ctx, unsigned rt, 
unsigned rb,
 nullify_end(ctx);
 }
 
+static void trans_fldw(DisasContext *ctx, arg_ldst *a, uint32_t insn)
+{
+do_floadw(ctx, a->t, a->b, a->x, a->scale * 4, a->disp, a->sp, a->m);
+}
+
 static void do_floadd(DisasContext *ctx, unsigned rt, unsigned rb,
   unsigned rx, int scale, target_sreg disp,
   unsigned sp, int modify)
@@ -1640,6 +1644,11 @@ static void do_floadd(DisasContext *ctx, unsigned rt, 
unsigned rb,
 nullify_end(ctx);
 }
 
+static void trans_fldd(DisasContext *ctx, arg_ldst *a, uint32_t insn)
+{
+do_floadd(ctx, a->t, a->b, a->x, a->scale * 8, a->disp, a->sp, a->m);
+}
+
 static void do_store(DisasContext *ctx, unsigned rt, unsigned rb,
  target_sreg disp, unsigned sp,
  int modify, TCGMemOp mop)
@@ -1664,6 +1673,11 @@ static void do_fstorew(DisasContext *ctx, unsigned rt, 
unsigned rb,
 nullify_end(ctx);
 }
 
+static void trans_fstw(DisasContext *ctx, arg_ldst *a, uint32_t insn)
+{
+do_fstorew(ctx, a->t, a->b, a->x, a->scale * 4, a->disp, a->sp, a->m);
+}
+
 static void do_fstored(DisasContext *ctx, unsigned rt, unsigned rb,
unsigned rx, int scale, target_sreg disp,
unsigned sp, int modify)
@@ -1679,6 +1693,11 @@ static void do_fstored(DisasContext *ctx, unsigned rt, 
unsigned rb,
 nullify_end(ctx);
 }
 
+static void trans_fstd(DisasContext *ctx, arg_ldst *a, uint32_t insn)
+{
+do_fstored(ctx, a->t, a->b, a->x, a->scale * 8, a->disp, a->sp, a->m);
+}
+
 static void do_fop_wew(DisasContext *ctx, unsigned rt, unsigned ra,
void (*func)(TCGv_i32, TCGv_env, TCGv_i32))
 {
@@ -2846,7 +2865,7 @@ static void trans_ld(DisasContext *ctx, arg_ldst *a, 
uint32_t insn)
 
 static void trans_st(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
-assert(a->scale == 0);
+assert(a->x == 0 && a->scale == 0);
 do_store(ctx, a->t, a->b, a->disp, a->sp, a->m, a->size | MO_TE);
 }
 
@@ -2963,103 +2982,6 @@ static void trans_ldo(DisasContext *ctx, arg_ldo *a, 
uint32_t insn)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_load(DisasContext *ctx, uint32_t insn,
-   bool is_mod, TCGMemOp mop)
-{
-unsigned rb = extract32(insn, 21, 5);
-unsigned rt = extract32(insn, 16, 5);
-unsigned sp = extract32(insn, 14, 2);
-target_sreg i = assemble_16(insn);
-
-do_load(ctx, rt, rb, 0, 0, i, sp, is_mod ? (i < 0 ? -1 : 1) : 0, mop);
-}
-
-static void trans_load_w(DisasContext *ctx, uint32_t insn)
-{
-unsigned rb = extract32(insn, 21, 5);
-unsigned rt = extract32(insn, 16, 5);
-unsigned sp = extract32(insn, 14, 2);
-target_sreg i = assemble_16a(insn);
-unsigned ext2 = extract32(insn, 1, 2);
-
-switch (ext2) {
-case 0:
-case 1:
-/* FLDW without modification.  */
-do_floadw(ctx, ext2 * 32 + 

[Qemu-devel] [PATCH 08/19] target/hppa: Convert indexed memory insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 157 ++-
 target/hppa/insns.decode |  24 
 2 files changed, 56 insertions(+), 125 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 91617bf9ad..792e838849 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -308,6 +308,13 @@ static int expand_sr3x(int val)
 return ~val;
 }
 
+/* Convert the M:A bits within a memory insn to the tri-state value
+   we use for the final M.  */
+static int ma_to_m(int val)
+{
+return val & 2 ? (val & 1 ? -1 : 1) : 0;
+}
+
 /* Include the auto-generated decoder.  */
 #include "decode.inc.c"
 
@@ -2842,116 +2849,57 @@ static void trans_cmpiclr(DisasContext *ctx, uint32_t 
insn)
 nullify_end(ctx);
 }
 
-static void trans_ld_idx_i(DisasContext *ctx, uint32_t insn,
-   const DisasInsn *di)
+static void trans_ld(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned sz = extract32(insn, 6, 2);
-unsigned a = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-int disp = low_sextract(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-int modify = (m ? (a ? -1 : 1) : 0);
-TCGMemOp mop = MO_TE | sz;
-
-do_load(ctx, rt, rb, 0, 0, disp, sp, modify, mop);
+do_load(ctx, a->t, a->b, a->x, a->scale * a->size,
+a->disp, a->sp, a->m, a->size | MO_TE);
 }
 
-static void trans_ld_idx_x(DisasContext *ctx, uint32_t insn,
-   const DisasInsn *di)
+static void trans_st(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned sz = extract32(insn, 6, 2);
-unsigned u = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rx = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-TCGMemOp mop = MO_TE | sz;
-
-do_load(ctx, rt, rb, rx, u ? sz : 0, 0, sp, m, mop);
+assert(a->scale == 0);
+do_store(ctx, a->t, a->b, a->disp, a->sp, a->m, a->size | MO_TE);
 }
 
-static void trans_st_idx_i(DisasContext *ctx, uint32_t insn,
-   const DisasInsn *di)
+static void trans_ldc(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
-int disp = low_sextract(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned sz = extract32(insn, 6, 2);
-unsigned a = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rr = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-int modify = (m ? (a ? -1 : 1) : 0);
-TCGMemOp mop = MO_TE | sz;
-
-do_store(ctx, rr, rb, disp, sp, modify, mop);
-}
-
-static void trans_ldcw(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned i = extract32(insn, 12, 1);
-unsigned au = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rx = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-TCGMemOp mop = MO_TEUL | MO_ALIGN_16;
+TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
 TCGv_reg zero, dest, ofs;
 TCGv_tl addr;
-int modify, disp = 0, scale = 0;
 
 nullify_over(ctx);
 
-if (i) {
-modify = (m ? (au ? -1 : 1) : 0);
-disp = low_sextract(rx, 0, 5);
-rx = 0;
-} else {
-modify = m;
-if (au) {
-scale = mop & MO_SIZE;
-}
-}
-if (modify) {
+if (a->m) {
 /* Base register modification.  Make sure if RT == RB,
we see the result of the load.  */
 dest = get_temp(ctx);
 } else {
-dest = dest_gpr(ctx, rt);
+dest = dest_gpr(ctx, a->t);
 }
 
-form_gva(ctx, &addr, &ofs, rb, rx, scale, disp, sp, modify,
- ctx->mmu_idx == MMU_PHYS_IDX);
+form_gva(ctx, &addr, &ofs, a->b, a->x, a->scale * a->size,
+ a->disp, a->sp, a->m, ctx->mmu_idx == MMU_PHYS_IDX);
 zero = tcg_const_reg(0);
 tcg_gen_atomic_xchg_reg(dest, addr, zero, ctx->mmu_idx, mop);
-if (modify) {
-save_gpr(ctx, rb, ofs);
+if (a->m) {
+save_gpr(ctx, a->b, ofs);
 }
-save_gpr(ctx, rt, dest);
+save_gpr(ctx, a->t, dest);
 
 nullify_end(ctx);
 }
 
-static void trans_stby(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_stby(DisasContext *ctx, arg_stby *a, uint32_t insn)
 {
-target_sreg disp = low_sextract(insn, 0, 5);
-unsigned m = extract32(insn, 5, 1);
-unsigned a = extract32(insn, 13, 1);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rt = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
 TCGv_reg ofs, val;
 TCGv_tl addr;
 
 nullify_over(ctx);
 
-form_gva(ctx, &addr, &ofs, rb, 0, 0, disp, sp, 

[Qemu-devel] [PATCH 13/19] target/hppa: Convert arithmetic immediate insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 168 +--
 target/hppa/insns.decode |  21 ++
 2 files changed, 96 insertions(+), 93 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 5df5b8dba4..51bd9016ab 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -321,6 +321,12 @@ static int expand_shl2(int val)
 return val << 2;
 }
 
+/* Used for assemble_21.  */
+static int expand_shl11(int val)
+{
+return val << 11;
+}
+
 
 /* Include the auto-generated decoder.  */
 #include "decode.inc.c"
@@ -901,16 +907,6 @@ static target_sreg assemble_16a(uint32_t insn)
 return x << 2;
 }
 
-static target_sreg assemble_21(uint32_t insn)
-{
-target_ureg x = -(target_ureg)(insn & 1);
-x = (x << 11) | extract32(insn, 1, 11);
-x = (x <<  2) | extract32(insn, 14, 2);
-x = (x <<  5) | extract32(insn, 16, 5);
-x = (x <<  2) | extract32(insn, 12, 2);
-return x << 11;
-}
-
 /* The parisc documentation describes only the general interpretation of
the conditions, without describing their exact implementation.  The
interpretations do not stand up well when considering ADD,C and SUB,B.
@@ -1225,6 +1221,20 @@ static void do_add_reg(DisasContext *ctx, arg_rrr_cf_sh 
*a,
 nullify_end(ctx);
 }
 
+static void do_add_imm(DisasContext *ctx, arg_rri_cf *a,
+   bool is_tsv, bool is_tc)
+{
+TCGv_reg tcg_im, tcg_r2;
+
+if (a->cf) {
+nullify_over(ctx);
+}
+tcg_im = load_const(ctx, a->i);
+tcg_r2 = load_gpr(ctx, a->r);
+do_add(ctx, a->t, tcg_im, tcg_r2, 0, 0, is_tsv, is_tc, 0, a->cf);
+nullify_end(ctx);
+}
+
 static void do_sub(DisasContext *ctx, unsigned rt, TCGv_reg in1,
TCGv_reg in2, bool is_tsv, bool is_b,
bool is_tc, unsigned cf)
@@ -1305,6 +1315,19 @@ static void do_sub_reg(DisasContext *ctx, arg_rrr_cf *a,
 nullify_end(ctx);
 }
 
+static void do_sub_imm(DisasContext *ctx, arg_rri_cf *a, bool is_tsv)
+{
+TCGv_reg tcg_im, tcg_r2;
+
+if (a->cf) {
+nullify_over(ctx);
+}
+tcg_im = load_const(ctx, a->i);
+tcg_r2 = load_gpr(ctx, a->r);
+do_sub(ctx, a->t, tcg_im, tcg_r2, is_tsv, 0, 0, a->cf);
+nullify_end(ctx);
+}
+
 static void do_cmpclr(DisasContext *ctx, unsigned rt, TCGv_reg in1,
   TCGv_reg in2, unsigned cf)
 {
@@ -2770,62 +2793,47 @@ static void trans_ds(DisasContext *ctx, arg_rrr_cf *a, 
uint32_t insn)
 nullify_end(ctx);
 }
 
-static void trans_addi(DisasContext *ctx, uint32_t insn)
+static void trans_addi(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
 {
-target_sreg im = low_sextract(insn, 0, 11);
-unsigned e1 = extract32(insn, 11, 1);
-unsigned cf = extract32(insn, 12, 4);
-unsigned rt = extract32(insn, 16, 5);
-unsigned r2 = extract32(insn, 21, 5);
-unsigned o1 = extract32(insn, 26, 1);
-TCGv_reg tcg_im, tcg_r2;
-
-if (cf) {
-nullify_over(ctx);
-}
-
-tcg_im = load_const(ctx, im);
-tcg_r2 = load_gpr(ctx, r2);
-do_add(ctx, rt, tcg_im, tcg_r2, 0, false, e1, !o1, false, cf);
-
-nullify_end(ctx);
+do_add_imm(ctx, a, false, false);
 }
 
-static void trans_subi(DisasContext *ctx, uint32_t insn)
+static void trans_addi_tsv(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
 {
-target_sreg im = low_sextract(insn, 0, 11);
-unsigned e1 = extract32(insn, 11, 1);
-unsigned cf = extract32(insn, 12, 4);
-unsigned rt = extract32(insn, 16, 5);
-unsigned r2 = extract32(insn, 21, 5);
-TCGv_reg tcg_im, tcg_r2;
-
-if (cf) {
-nullify_over(ctx);
-}
-
-tcg_im = load_const(ctx, im);
-tcg_r2 = load_gpr(ctx, r2);
-do_sub(ctx, rt, tcg_im, tcg_r2, e1, false, false, cf);
-
-nullify_end(ctx);
+do_add_imm(ctx, a, true, false);
 }
 
-static void trans_cmpiclr(DisasContext *ctx, uint32_t insn)
+static void trans_addi_tc(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
+{
+do_add_imm(ctx, a, false, true);
+}
+
+static void trans_addi_tc_tsv(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
+{
+do_add_imm(ctx, a, true, true);
+}
+
+static void trans_subi(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
+{
+do_sub_imm(ctx, a, false);
+}
+
+static void trans_subi_tsv(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
+{
+do_sub_imm(ctx, a, true);
+}
+
+static void trans_cmpiclr(DisasContext *ctx, arg_rri_cf *a, uint32_t insn)
 {
-target_sreg im = low_sextract(insn, 0, 11);
-unsigned cf = extract32(insn, 12, 4);
-unsigned rt = extract32(insn, 16, 5);
-unsigned r2 = extract32(insn, 21, 5);
 TCGv_reg tcg_im, tcg_r2;
 
-if (cf) {
+if (a->cf) {
 nullify_over(ctx);
 }
 
-tcg_im = load_const(ctx, im);
-tcg_r2 = load_gpr(ctx, r2);
-do_cmpclr(ctx, rt, tcg_im, tcg_r2, cf);
+tcg_im = load_const(ctx, a->i);
+tcg_r2 = load_gpr(ctx, a->r);
+

[Qemu-devel] [PATCH 01/19] target/hppa: Use DisasContextBase.is_jmp

2018-02-17 Thread Richard Henderson
Instead of returning DisasJumpType, immediately store it.

Signed-off-by: Richard Henderson 
---
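
Not part of the patch, just background for anyone reading along: the
translator loop (hppa_tr_translate_insn and the generic loop above it)
already inspects ctx->base.is_jmp after each instruction, so the
per-insn helpers can record the TB exit state there directly instead of
threading a DisasJumpType return value through every caller.  A minimal
sketch of that contract, with stand-in types and made-up helper names:

    /* Sketch only -- stand-ins for the real QEMU types and helpers. */
    typedef enum { DISAS_NEXT, DISAS_NORETURN } DisasJumpType;

    typedef struct { DisasJumpType is_jmp; } DisasContextBase;
    typedef struct { DisasContextBase base; /* + target state */ } DisasContext;

    static void gen_some_exception(DisasContext *ctx)
    {
        /* ...emit the exception..., then mark the TB as ended: */
        ctx->base.is_jmp = DISAS_NORETURN;
    }

    static void translate_one_insn(DisasContext *ctx)
    {
        ctx->base.is_jmp = DISAS_NEXT;
        gen_some_exception(ctx);
        /* The caller (the translator loop) now reads ctx->base.is_jmp
           to decide whether to keep translating this TB. */
    }
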
 target/hppa/translate.c | 971 
 1 file changed, 487 insertions(+), 484 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 6499b392f9..f72bc84873 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -290,10 +290,6 @@ typedef struct DisasContext {
 bool psw_n_nonzero;
 } DisasContext;
 
-/* Target-specific return values from translate_one, indicating the
-   state of the TB.  Note that DISAS_NEXT indicates that we are not
-   exiting the TB.  */
-
 /* We are not using a goto_tb (for whatever reason), but have updated
the iaq (for whatever reason), so don't do it again on exit.  */
 #define DISAS_IAQ_N_UPDATED  DISAS_TARGET_0
@@ -308,8 +304,8 @@ typedef struct DisasContext {
 
 typedef struct DisasInsn {
 uint32_t insn, mask;
-DisasJumpType (*trans)(DisasContext *ctx, uint32_t insn,
-   const struct DisasInsn *f);
+void (*trans)(DisasContext *ctx, uint32_t insn,
+  const struct DisasInsn *f);
 union {
 void (*ttt)(TCGv_reg, TCGv_reg, TCGv_reg);
 void (*weww)(TCGv_i32, TCGv_env, TCGv_i32, TCGv_i32);
@@ -678,9 +674,10 @@ static void nullify_set(DisasContext *ctx, bool x)
 
 /* Mark the end of an instruction that may have been nullified.
This is the pair to nullify_over.  */
-static DisasJumpType nullify_end(DisasContext *ctx, DisasJumpType status)
+static void nullify_end(DisasContext *ctx)
 {
 TCGLabel *null_lab = ctx->null_lab;
+DisasJumpType status = ctx->base.is_jmp;
 
 /* For NEXT, NORETURN, STALE, we can easily continue (or exit).
For UPDATED, we cannot update on the nullified path.  */
@@ -690,7 +687,7 @@ static DisasJumpType nullify_end(DisasContext *ctx, 
DisasJumpType status)
 /* The current insn wasn't conditional or handled the condition
applied to it without a branch, so the (new) setting of
NULL_COND can be applied directly to the next insn.  */
-return status;
+return;
 }
 ctx->null_lab = NULL;
 
@@ -708,9 +705,8 @@ static DisasJumpType nullify_end(DisasContext *ctx, 
DisasJumpType status)
 ctx->null_cond = cond_make_n();
 }
 if (status == DISAS_NORETURN) {
-status = DISAS_NEXT;
+ctx->base.is_jmp = DISAS_NEXT;
 }
-return status;
 }
 
 static void copy_iaoq_entry(TCGv_reg dest, target_ureg ival, TCGv_reg vval)
@@ -734,41 +730,45 @@ static void gen_excp_1(int exception)
 tcg_temp_free_i32(t);
 }
 
-static DisasJumpType gen_excp(DisasContext *ctx, int exception)
+static void gen_excp(DisasContext *ctx, int exception)
 {
 copy_iaoq_entry(cpu_iaoq_f, ctx->iaoq_f, cpu_iaoq_f);
 copy_iaoq_entry(cpu_iaoq_b, ctx->iaoq_b, cpu_iaoq_b);
 nullify_save(ctx);
 gen_excp_1(exception);
-return DISAS_NORETURN;
+ctx->base.is_jmp = DISAS_NORETURN;
 }
 
-static DisasJumpType gen_excp_iir(DisasContext *ctx, int exc)
+static void gen_excp_iir(DisasContext *ctx, int exc)
 {
 TCGv_reg tmp = tcg_const_reg(ctx->insn);
 tcg_gen_st_reg(tmp, cpu_env, offsetof(CPUHPPAState, cr[CR_IIR]));
 tcg_temp_free(tmp);
-return gen_excp(ctx, exc);
+gen_excp(ctx, exc);
 }
 
-static DisasJumpType gen_illegal(DisasContext *ctx)
+static void gen_illegal(DisasContext *ctx)
 {
 nullify_over(ctx);
-return nullify_end(ctx, gen_excp_iir(ctx, EXCP_ILL));
+gen_excp_iir(ctx, EXCP_ILL);
+nullify_end(ctx);
 }
 
-#define CHECK_MOST_PRIVILEGED(EXCP)   \
-do {  \
-if (ctx->privilege != 0) {\
-nullify_over(ctx);\
-return nullify_end(ctx, gen_excp_iir(ctx, EXCP)); \
-} \
+#define CHECK_MOST_PRIVILEGED(EXCP)  \
+do { \
+if (ctx->privilege != 0) {   \
+nullify_over(ctx);   \
+gen_excp_iir(ctx, EXCP); \
+nullify_end(ctx);\
+return;  \
+}\
 } while (0)
 
 static bool use_goto_tb(DisasContext *ctx, target_ureg dest)
 {
 /* Suppress goto_tb in the case of single-steping and IO.  */
-if ((tb_cflags(ctx->base.tb) & CF_LAST_IO) || 
ctx->base.singlestep_enabled) {
+if ((tb_cflags(ctx->base.tb) & CF_LAST_IO)
+|| ctx->base.singlestep_enabled) {
 return false;
 }
 return true;
@@ -1131,9 +1131,9 @@ static TCGv_reg do_sub_sv(DisasContext *ctx, TCGv_reg res,
 return sv;
 }
 
-static DisasJumpType do_add(DisasContext *ctx, unsigned rt, TCGv_reg in1,
-TCGv_reg in2, unsigned shift, bool is_l,

[Qemu-devel] [PATCH 06/19] target/hppa: Convert memory management insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 159 +++
 target/hppa/insns.decode |  38 +++
 2 files changed, 88 insertions(+), 109 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 074234b1e0..ca46e8d50b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -302,6 +302,12 @@ static int expand_sm_imm(int val)
 return val;
 }
 
+/* Inverted space register indicates 0 means sr0 not inferred from base.  */
+static int expand_sr3x(int val)
+{
+return ~val;
+}
+
 /* Include the auto-generated decoder.  */
 #include "decode.inc.c"
 
@@ -2007,7 +2013,7 @@ static void do_page_zero(DisasContext *ctx)
 }
 #endif
 
-static void trans_nop(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_nop(DisasContext *ctx, arg_nop *a, uint32_t insn)
 {
 cond_free(&ctx->null_cond);
 }
@@ -2330,30 +2336,23 @@ static void gen_hlt(DisasContext *ctx, int reset)
 }
 #endif /* !CONFIG_USER_ONLY */
 
-static void trans_base_idx_mod(DisasContext *ctx, uint32_t insn,
-   const DisasInsn *di)
+static void trans_nop_addrx(DisasContext *ctx, arg_ldst *a, uint32_t insn)
 {
-unsigned rb = extract32(insn, 21, 5);
-unsigned rx = extract32(insn, 16, 5);
-TCGv_reg dest = dest_gpr(ctx, rb);
-TCGv_reg src1 = load_gpr(ctx, rb);
-TCGv_reg src2 = load_gpr(ctx, rx);
-
-/* The only thing we need to do is the base register modification.  */
-tcg_gen_add_reg(dest, src1, src2);
-save_gpr(ctx, rb, dest);
+if (a->m) {
+TCGv_reg dest = dest_gpr(ctx, a->b);
+TCGv_reg src1 = load_gpr(ctx, a->b);
+TCGv_reg src2 = load_gpr(ctx, a->x);
 
+/* The only thing we need to do is the base register modification.  */
+tcg_gen_add_reg(dest, src1, src2);
+save_gpr(ctx, a->b, dest);
+}
 cond_free(&ctx->null_cond);
 }
 
-static void trans_probe(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_probe(DisasContext *ctx, arg_probe *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned sp = extract32(insn, 14, 2);
-unsigned rr = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-unsigned is_write = extract32(insn, 6, 1);
-unsigned is_imm = extract32(insn, 13, 1);
+unsigned rt = a->t;
 TCGv_reg dest, ofs;
 TCGv_i32 level, want;
 TCGv_tl addr;
@@ -2361,16 +2360,16 @@ static void trans_probe(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 nullify_over(ctx);
 
 dest = dest_gpr(ctx, rt);
-form_gva(ctx, &addr, &ofs, rb, 0, 0, 0, sp, 0, false);
+form_gva(ctx, &addr, &ofs, a->b, 0, 0, 0, a->sp, 0, false);
 
-if (is_imm) {
-level = tcg_const_i32(extract32(insn, 16, 2));
+if (a->imm) {
+level = tcg_const_i32(a->ri);
 } else {
 level = tcg_temp_new_i32();
-tcg_gen_trunc_reg_i32(level, load_gpr(ctx, rr));
+tcg_gen_trunc_reg_i32(level, load_gpr(ctx, a->ri));
 tcg_gen_andi_i32(level, level, 3);
 }
-want = tcg_const_i32(is_write ? PAGE_WRITE : PAGE_READ);
+want = tcg_const_i32(a->write ? PAGE_WRITE : PAGE_READ);
 
 gen_helper_probe(dest, cpu_env, addr, level, want);
 
@@ -2381,29 +2380,18 @@ static void trans_probe(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 nullify_end(ctx);
 }
 
-#ifndef CONFIG_USER_ONLY
-static void trans_ixtlbx(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_ixtlbx(DisasContext *ctx, arg_ixtlbx *a, uint32_t insn)
 {
-unsigned sp;
-unsigned rr = extract32(insn, 16, 5);
-unsigned rb = extract32(insn, 21, 5);
-unsigned is_data = insn & 0x1000;
-unsigned is_addr = insn & 0x40;
+CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
 TCGv_tl addr;
 TCGv_reg ofs, reg;
 
-if (is_data) {
-sp = extract32(insn, 14, 2);
-} else {
-sp = ~assemble_sr3(insn);
-}
-
-CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
 nullify_over(ctx);
 
-form_gva(ctx, &addr, &ofs, rb, 0, 0, 0, sp, 0, false);
-reg = load_gpr(ctx, rr);
-if (is_addr) {
+form_gva(ctx, &addr, &ofs, a->b, 0, 0, 0, a->sp, 0, false);
+reg = load_gpr(ctx, a->r);
+if (a->addr) {
 gen_helper_itlba(cpu_env, addr, reg);
 } else {
 gen_helper_itlbp(cpu_env, addr, reg);
@@ -2411,80 +2399,67 @@ static void trans_ixtlbx(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 
 /* Exit TB for ITLB change if mmu is enabled.  This *should* not be
the case, since the OS TLB fill handler runs with mmu disabled.  */
-if (!is_data && (ctx->tb_flags & PSW_C)) {
+if (!a->data && (ctx->tb_flags & PSW_C)) {
 ctx->base.is_jmp = DISAS_IAQ_N_STALE;
 }
 nullify_end(ctx);
+#endif
 }
 
-static void trans_pxtlbx(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_pxtlbx(DisasContext *ctx, arg_pxtlbx *a, 

[Qemu-devel] [PATCH 05/19] target/hppa: Unify specializations of OR

2018-02-17 Thread Richard Henderson
With decodetree.py, the specializations would conflict so we
must have a single entry point for all variants of OR.

Signed-off-by: Richard Henderson 
---
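
Background sketch, not part of the patch (and the encodings below are
invented, not the real HPPA ones): the old hand-written tables could
carry overlapping entries because the dispatcher scanned them in order
and stopped at the first hit, so a more specific row simply shadowed the
generic one.  decodetree.py has no such ordering -- two patterns that
can match the same bits are rejected -- which is why the NOP/COPY/PAUSE
idioms become runtime checks inside a single trans_or:

    #include <stdint.h>
    #include <stddef.h>

    typedef void (*TransFn)(uint32_t insn);

    typedef struct { uint32_t match, mask; TransFn trans; } Row;

    static void trans_nop_like(uint32_t insn) { (void)insn; }
    static void trans_or_like(uint32_t insn)  { (void)insn; }

    /* Invented encodings: every insn the first row accepts is also
       accepted by the second row, whose mask is a strict subset. */
    static const Row table[] = {
        { 0x08000200u, 0xfc000ff0u, trans_nop_like },  /* specific */
        { 0x08000200u, 0xfc000f00u, trans_or_like  },  /* generic  */
    };

    static void dispatch(uint32_t insn)
    {
        /* "First match wins" is what made the overlap harmless here;
           decodetree.py offers no equivalent, hence one entry point. */
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
            if ((insn & table[i].mask) == table[i].match) {
                table[i].trans(insn);
                return;
            }
        }
    }
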
 target/hppa/translate.c | 108 +++-
 1 file changed, 60 insertions(+), 48 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ae5969be0b..074234b1e0 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -2634,20 +2634,70 @@ static void trans_log(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 nullify_end(ctx);
 }
 
-/* OR r,0,t -> COPY (according to gas) */
-static void trans_copy(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_or(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
 {
+unsigned r2 = extract32(insn, 21, 5);
 unsigned r1 = extract32(insn, 16, 5);
+unsigned cf = extract32(insn, 12, 4);
 unsigned rt = extract32(insn,  0, 5);
+TCGv_reg tcg_r1, tcg_r2;
 
-if (r1 == 0) {
-TCGv_reg dest = dest_gpr(ctx, rt);
-tcg_gen_movi_reg(dest, 0);
-save_gpr(ctx, rt, dest);
-} else {
-save_gpr(ctx, rt, cpu_gr[r1]);
+if (cf == 0) {
+if (rt == 0) { /* NOP */
+cond_free(&ctx->null_cond);
+return;
+}
+if (r2 == 0) { /* COPY */
+if (r1 == 0) {
+TCGv_reg dest = dest_gpr(ctx, rt);
+tcg_gen_movi_reg(dest, 0);
+save_gpr(ctx, rt, dest);
+} else {
+save_gpr(ctx, rt, cpu_gr[r1]);
+}
+cond_free(&ctx->null_cond);
+return;
+}
+#ifndef CONFIG_USER_ONLY
+/* These are QEMU extensions and are nops in the real architecture:
+ *
+ * or %r10,%r10,%r10 -- idle loop; wait for interrupt
+ * or %r31,%r31,%r31 -- death loop; offline cpu
+ *  currently implemented as idle.
+ */
+if ((rt == 10 || rt == 31) && r1 == rt && r2 == rt) { /* PAUSE */
+TCGv_i32 tmp;
+
+/* No need to check for supervisor, as userland can only pause
+   until the next timer interrupt.  */
+nullify_over(ctx);
+
+/* Advance the instruction queue.  */
+copy_iaoq_entry(cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
+copy_iaoq_entry(cpu_iaoq_b, ctx->iaoq_n, ctx->iaoq_n_var);
+nullify_set(ctx, 0);
+
+/* Tell the qemu main loop to halt until this cpu has work.  */
+tmp = tcg_const_i32(1);
+tcg_gen_st_i32(tmp, cpu_env, -offsetof(HPPACPU, env) +
+ offsetof(CPUState, halted));
+tcg_temp_free_i32(tmp);
+gen_excp_1(EXCP_HALTED);
+ctx->base.is_jmp = DISAS_NORETURN;
+
+nullify_end(ctx);
+return;
+}
+#endif
 }
-cond_free(&ctx->null_cond);
+
+if (cf) {
+nullify_over(ctx);
+}
+tcg_r1 = load_gpr(ctx, r1);
+tcg_r2 = load_gpr(ctx, r2);
+do_log(ctx, rt, tcg_r1, tcg_r2, cf, tcg_gen_or_reg);
+nullify_end(ctx);
 }
 
 static void trans_cmpclr(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
@@ -2792,48 +2842,10 @@ static void trans_ds(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 nullify_end(ctx);
 }
 
-#ifndef CONFIG_USER_ONLY
-/* These are QEMU extensions and are nops in the real architecture:
- *
- * or %r10,%r10,%r10 -- idle loop; wait for interrupt
- * or %r31,%r31,%r31 -- death loop; offline cpu
- *  currently implemented as idle.
- */
-static void trans_pause(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
-{
-TCGv_i32 tmp;
-
-/* No need to check for supervisor, as userland can only pause
-   until the next timer interrupt.  */
-nullify_over(ctx);
-
-/* Advance the instruction queue.  */
-copy_iaoq_entry(cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
-copy_iaoq_entry(cpu_iaoq_b, ctx->iaoq_n, ctx->iaoq_n_var);
-nullify_set(ctx, 0);
-
-/* Tell the qemu main loop to halt until this cpu has work.  */
-tmp = tcg_const_i32(1);
-tcg_gen_st_i32(tmp, cpu_env, -offsetof(HPPACPU, env) +
- offsetof(CPUState, halted));
-tcg_temp_free_i32(tmp);
-gen_excp_1(EXCP_HALTED);
-ctx->base.is_jmp = DISAS_NORETURN;
-
-nullify_end(ctx);
-}
-#endif
-
 static const DisasInsn table_arith_log[] = {
-{ 0x08000240u, 0xfc00u, trans_nop },  /* or x,y,0 */
-{ 0x08000240u, 0xffe0ffe0u, trans_copy }, /* or x,0,t */
-#ifndef CONFIG_USER_ONLY
-{ 0x094a024au, 0xu, trans_pause }, /* or r10,r10,r10 */
-{ 0x0bff025fu, 0xu, trans_pause }, /* or r31,r31,r31 */
-#endif
+{ 0x08000240u, 0xfc000fe0u, trans_or },
 { 0x0800u, 0xfc000fe0u, trans_log, .f.ttt = tcg_gen_andc_reg },
 { 0x08000200u, 0xfc000fe0u, trans_log, .f.ttt = tcg_gen_and_reg },
-{ 0x08000240u, 

[Qemu-devel] [PATCH 07/19] target/hppa: Convert arithmetic/logical insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 337 ++-
 target/hppa/insns.decode |  40 ++
 2 files changed, 197 insertions(+), 180 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ca46e8d50b..91617bf9ad 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1223,6 +1223,20 @@ static void do_add(DisasContext *ctx, unsigned rt, 
TCGv_reg in1,
 ctx->null_cond = cond;
 }
 
+static void do_add_reg(DisasContext *ctx, arg_rrr_cf_sh *a,
+   bool is_l, bool is_tsv, bool is_tc, bool is_c)
+{
+TCGv_reg tcg_r1, tcg_r2;
+
+if (a->cf) {
+nullify_over(ctx);
+}
+tcg_r1 = load_gpr(ctx, a->r1);
+tcg_r2 = load_gpr(ctx, a->r2);
+do_add(ctx, a->t, tcg_r1, tcg_r2, a->sh, is_l, is_tsv, is_tc, is_c, a->cf);
+nullify_end(ctx);
+}
+
 static void do_sub(DisasContext *ctx, unsigned rt, TCGv_reg in1,
TCGv_reg in2, bool is_tsv, bool is_b,
bool is_tc, unsigned cf)
@@ -1289,6 +1303,20 @@ static void do_sub(DisasContext *ctx, unsigned rt, 
TCGv_reg in1,
 ctx->null_cond = cond;
 }
 
+static void do_sub_reg(DisasContext *ctx, arg_rrr_cf *a,
+   bool is_tsv, bool is_b, bool is_tc)
+{
+TCGv_reg tcg_r1, tcg_r2;
+
+if (a->cf) {
+nullify_over(ctx);
+}
+tcg_r1 = load_gpr(ctx, a->r1);
+tcg_r2 = load_gpr(ctx, a->r2);
+do_sub(ctx, a->t, tcg_r1, tcg_r2, is_tsv, is_b, is_tc, a->cf);
+nullify_end(ctx);
+}
+
 static void do_cmpclr(DisasContext *ctx, unsigned rt, TCGv_reg in1,
   TCGv_reg in2, unsigned cf)
 {
@@ -1334,6 +1362,20 @@ static void do_log(DisasContext *ctx, unsigned rt, 
TCGv_reg in1,
 }
 }
 
+static void do_log_reg(DisasContext *ctx, arg_rrr_cf *a,
+   void (*fn)(TCGv_reg, TCGv_reg, TCGv_reg))
+{
+TCGv_reg tcg_r1, tcg_r2;
+
+if (a->cf) {
+nullify_over(ctx);
+}
+tcg_r1 = load_gpr(ctx, a->r1);
+tcg_r2 = load_gpr(ctx, a->r2);
+do_log(ctx, a->t, tcg_r1, tcg_r2, a->cf, fn);
+nullify_end(ctx);
+}
+
 static void do_unit(DisasContext *ctx, unsigned rt, TCGv_reg in1,
 TCGv_reg in2, unsigned cf, bool is_tc,
 void (*fn)(TCGv_reg, TCGv_reg, TCGv_reg))
@@ -2475,129 +2517,85 @@ static void trans_lci(DisasContext *ctx, arg_lci *a, 
uint32_t insn)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_add(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_add(DisasContext *ctx, arg_rrr_cf_sh *a, uint32_t insn)
 {
-unsigned r2 = extract32(insn, 21, 5);
-unsigned r1 = extract32(insn, 16, 5);
-unsigned cf = extract32(insn, 12, 4);
-unsigned ext = extract32(insn, 8, 4);
-unsigned shift = extract32(insn, 6, 2);
-unsigned rt = extract32(insn,  0, 5);
-TCGv_reg tcg_r1, tcg_r2;
-bool is_c = false;
-bool is_l = false;
-bool is_tc = false;
-bool is_tsv = false;
-
-switch (ext) {
-case 0x6: /* ADD, SHLADD */
-break;
-case 0xa: /* ADD,L, SHLADD,L */
-is_l = true;
-break;
-case 0xe: /* ADD,TSV, SHLADD,TSV (1) */
-is_tsv = true;
-break;
-case 0x7: /* ADD,C */
-is_c = true;
-break;
-case 0xf: /* ADD,C,TSV */
-is_c = is_tsv = true;
-break;
-default:
-gen_illegal(ctx);
-return;
-}
-
-if (cf) {
-nullify_over(ctx);
-}
-tcg_r1 = load_gpr(ctx, r1);
-tcg_r2 = load_gpr(ctx, r2);
-do_add(ctx, rt, tcg_r1, tcg_r2, shift, is_l, is_tsv, is_tc, is_c, cf);
-nullify_end(ctx);
+do_add_reg(ctx, a, false, false, false, false);
 }
 
-static void trans_sub(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_add_l(DisasContext *ctx, arg_rrr_cf_sh *a, uint32_t insn)
 {
-unsigned r2 = extract32(insn, 21, 5);
-unsigned r1 = extract32(insn, 16, 5);
-unsigned cf = extract32(insn, 12, 4);
-unsigned ext = extract32(insn, 6, 6);
-unsigned rt = extract32(insn,  0, 5);
-TCGv_reg tcg_r1, tcg_r2;
-bool is_b = false;
-bool is_tc = false;
-bool is_tsv = false;
-
-switch (ext) {
-case 0x10: /* SUB */
-break;
-case 0x30: /* SUB,TSV */
-is_tsv = true;
-break;
-case 0x14: /* SUB,B */
-is_b = true;
-break;
-case 0x34: /* SUB,B,TSV */
-is_b = is_tsv = true;
-break;
-case 0x13: /* SUB,TC */
-is_tc = true;
-break;
-case 0x33: /* SUB,TSV,TC */
-is_tc = is_tsv = true;
-break;
-default:
-return gen_illegal(ctx);
-}
-
-if (cf) {
-nullify_over(ctx);
-}
-tcg_r1 = load_gpr(ctx, r1);
-tcg_r2 = load_gpr(ctx, r2);
-do_sub(ctx, rt, tcg_r1, tcg_r2, is_tsv, is_b, is_tc, cf);
-nullify_end(ctx);
+do_add_reg(ctx, a, true, false, false, false);
 }
 
-static void 

[Qemu-devel] [PATCH 02/19] target/hppa: Begin using scripts/decodetree.py

2018-02-17 Thread Richard Henderson
Convert the BREAK instruction to start.

Signed-off-by: Richard Henderson 
---
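
For anyone who has not used scripts/decodetree.py before (background,
not part of the patch): the generated decode.inc.c exposes a single
decode(ctx, insn) that returns false when nothing matched -- which is
why translate_one can still fall back to the old switch -- and, for each
matched pattern, extracts the named fields into an arg_<pattern> struct
and calls the corresponding trans_<pattern>() hook.  Very roughly, and
with an invented two-field pattern since BREAK itself extracts no
fields, the generated code has this shape:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct DisasContext DisasContext;   /* opaque in this sketch */

    static inline uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    /* Hypothetical pattern with two 5-bit fields; BREAK itself has none. */
    typedef struct { int r, t; } arg_foo;

    static void trans_foo(DisasContext *ctx, arg_foo *a, uint32_t insn)
    {
        /* ...hand-written translation body, using a->r and a->t... */
        (void)ctx; (void)a; (void)insn;
    }

    static bool decode(DisasContext *ctx, uint32_t insn)
    {
        if ((insn & 0xfc000000u) == 0x04000000u) {  /* invented opcode */
            arg_foo a = {
                .r = extract32(insn, 21, 5),
                .t = extract32(insn, 0, 5),
            };
            trans_foo(ctx, &a, insn);
            return true;
        }
        return false;  /* no match: caller falls back to the old decoder */
    }
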
 target/hppa/translate.c   | 14 +++---
 target/hppa/Makefile.objs |  8 
 target/hppa/insns.decode  | 24 
 3 files changed, 43 insertions(+), 3 deletions(-)
 create mode 100644 target/hppa/insns.decode

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index f72bc84873..a503ae38d4 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -290,6 +290,9 @@ typedef struct DisasContext {
 bool psw_n_nonzero;
 } DisasContext;
 
+/* Include the auto-generated decoder.  */
+#include "decode.inc.c"
+
 /* We are not using a goto_tb (for whatever reason), but have updated
the iaq (for whatever reason), so don't do it again on exit.  */
 #define DISAS_IAQ_N_UPDATED  DISAS_TARGET_0
@@ -1997,7 +2000,7 @@ static void trans_nop(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_break(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_break(DisasContext *ctx, arg_break *a, uint32_t insn)
 {
 nullify_over(ctx);
 gen_excp_iir(ctx, EXCP_BREAK);
@@ -2320,7 +2323,6 @@ static void gen_hlt(DisasContext *ctx, int reset)
 #endif /* !CONFIG_USER_ONLY */
 
 static const DisasInsn table_system[] = {
-{ 0xu, 0xfc001fe0u, trans_break },
 { 0x1820u, 0xffe01fffu, trans_mtsp },
 { 0x1840u, 0xfc00u, trans_mtctl },
 { 0x016018c0u, 0xffe0u, trans_mtsarcm },
@@ -4508,8 +4510,14 @@ static void translate_table_int(DisasContext *ctx, 
uint32_t insn,
 
 static void translate_one(DisasContext *ctx, uint32_t insn)
 {
-uint32_t opc = extract32(insn, 26, 6);
+uint32_t opc;
 
+/* Transition to the auto-generated decoder.  */
+if (decode(ctx, insn)) {
+return;
+}
+
+opc = extract32(insn, 26, 6);
 switch (opc) {
 case 0x00: /* system op */
 translate_table(ctx, insn, table_system);
diff --git a/target/hppa/Makefile.objs b/target/hppa/Makefile.objs
index 3359da5341..174f50a96c 100644
--- a/target/hppa/Makefile.objs
+++ b/target/hppa/Makefile.objs
@@ -1,3 +1,11 @@
 obj-y += translate.o helper.o cpu.o op_helper.o gdbstub.o mem_helper.o
 obj-y += int_helper.o
 obj-$(CONFIG_SOFTMMU) += machine.o
+
+DECODETREE = $(SRC_PATH)/scripts/decodetree.py
+
+target/hppa/decode.inc.c: $(SRC_PATH)/target/hppa/insns.decode $(DECODETREE)
+   $(call quiet-command,\
+ $(PYTHON) $(DECODETREE) -o $@ $<, "GEN", $(TARGET_DIR)$@)
+
+target/hppa/translate.o: target/hppa/decode.inc.c
diff --git a/target/hppa/insns.decode b/target/hppa/insns.decode
new file mode 100644
index 00..6c2d3a3a52
--- /dev/null
+++ b/target/hppa/insns.decode
@@ -0,0 +1,24 @@
+#
+# HPPA instruction decode definitions.
+#
+# Copyright (c) 2018 Richard Henderson 
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+#
+
+
+# System
+
+
+break  00 - - ---  -
-- 
2.14.3




[Qemu-devel] [PATCH 03/19] target/hppa: Convert move to/from system registers

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 57 +---
 target/hppa/insns.decode | 15 +
 2 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index a503ae38d4..9b2de2fa2a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -844,7 +844,7 @@ static unsigned assemble_rc64(uint32_t insn)
 return r2 * 32 + r1 * 4 + r0;
 }
 
-static unsigned assemble_sr3(uint32_t insn)
+static inline unsigned assemble_sr3(uint32_t insn)
 {
 unsigned s2 = extract32(insn, 13, 1);
 unsigned s0 = extract32(insn, 14, 2);
@@ -2015,9 +2015,9 @@ static void trans_sync(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_mfia(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mfia(DisasContext *ctx, arg_mfia *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
+unsigned rt = a->t;
 TCGv_reg tmp = dest_gpr(ctx, rt);
 tcg_gen_movi_reg(tmp, ctx->iaoq_f);
 save_gpr(ctx, rt, tmp);
@@ -2025,10 +2025,10 @@ static void trans_mfia(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_mfsp(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mfsp(DisasContext *ctx, arg_mfsp *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned rs = assemble_sr3(insn);
+unsigned rt = a->t;
+unsigned rs = a->sp;
 TCGv_i64 t0 = tcg_temp_new_i64();
 TCGv_reg t1 = tcg_temp_new();
 
@@ -2043,16 +2043,16 @@ static void trans_mfsp(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_mfctl(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mfctl(DisasContext *ctx, arg_mfctl *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-unsigned ctl = extract32(insn, 21, 5);
+unsigned rt = a->t;
+unsigned ctl = a->r;
 TCGv_reg tmp;
 
 switch (ctl) {
 case CR_SAR:
 #ifdef TARGET_HPPA64
-if (extract32(insn, 14, 1) == 0) {
+if (a->e == 0) {
 /* MFSAR without ,W masks low 5 bits.  */
 tmp = dest_gpr(ctx, rt);
 tcg_gen_andi_reg(tmp, cpu_sar, 31);
@@ -2094,10 +2094,10 @@ static void trans_mfctl(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_mtsp(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mtsp(DisasContext *ctx, arg_mtsp *a, uint32_t insn)
 {
-unsigned rr = extract32(insn, 16, 5);
-unsigned rs = assemble_sr3(insn);
+unsigned rr = a->r;
+unsigned rs = a->sp;
 TCGv_i64 t64;
 
 if (rs >= 5) {
@@ -2120,11 +2120,10 @@ static void trans_mtsp(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 nullify_end(ctx);
 }
 
-static void trans_mtctl(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mtctl(DisasContext *ctx, arg_mtctl *a, uint32_t insn)
 {
-unsigned rin = extract32(insn, 16, 5);
-unsigned ctl = extract32(insn, 21, 5);
-TCGv_reg reg = load_gpr(ctx, rin);
+unsigned ctl = a->t;
+TCGv_reg reg = load_gpr(ctx, a->r);
 TCGv_reg tmp;
 
 if (ctl == CR_SAR) {
@@ -2176,12 +2175,11 @@ static void trans_mtctl(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 #endif
 }
 
-static void trans_mtsarcm(DisasContext *ctx, uint32_t insn, const DisasInsn 
*di)
+static void trans_mtsarcm(DisasContext *ctx, arg_mtsarcm *a, uint32_t insn)
 {
-unsigned rin = extract32(insn, 16, 5);
 TCGv_reg tmp = tcg_temp_new();
 
-tcg_gen_not_reg(tmp, load_gpr(ctx, rin));
+tcg_gen_not_reg(tmp, load_gpr(ctx, a->r));
 tcg_gen_andi_reg(tmp, tmp, TARGET_REGISTER_BITS - 1);
 save_or_nullify(ctx, cpu_sar, tmp);
 tcg_temp_free(tmp);
@@ -2267,24 +2265,26 @@ static void trans_ssm(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 ctx->base.is_jmp = DISAS_IAQ_N_STALE_EXIT;
 nullify_end(ctx);
 }
+#endif /* !CONFIG_USER_ONLY */
 
-static void trans_mtsm(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_mtsm(DisasContext *ctx, arg_mtsm *a, uint32_t insn)
 {
-unsigned rr = extract32(insn, 16, 5);
-TCGv_reg tmp, reg;
-
 CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
+TCGv_reg tmp, reg;
 nullify_over(ctx);
 
-reg = load_gpr(ctx, rr);
+reg = load_gpr(ctx, a->r);
 tmp = get_temp(ctx);
 gen_helper_swap_system_mask(tmp, cpu_env, reg);
 
 /* Exit the TB to recognize new interrupts.  */
 ctx->base.is_jmp = DISAS_IAQ_N_STALE_EXIT;
 nullify_end(ctx);
+#endif
 }
 
+#ifndef CONFIG_USER_ONLY
 static void trans_rfi(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
 {
 unsigned comp = extract32(insn, 5, 4);
@@ -2323,19 +2323,12 @@ static void gen_hlt(DisasContext *ctx, 

[Qemu-devel] [PATCH 04/19] target/hppa: Convert remainder of system insns

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c  | 92 ++--
 target/hppa/insns.decode | 12 +++
 2 files changed, 55 insertions(+), 49 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 9b2de2fa2a..ae5969be0b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -290,6 +290,18 @@ typedef struct DisasContext {
 bool psw_n_nonzero;
 } DisasContext;
 
+/* Note that ssm/rsm instructions number PSW_W and PSW_E differently.  */
+static int expand_sm_imm(int val)
+{
+if (val & PSW_SM_E) {
+val = (val & ~PSW_SM_E) | PSW_E;
+}
+if (val & PSW_SM_W) {
+val = (val & ~PSW_SM_W) | PSW_W;
+}
+return val;
+}
+
 /* Include the auto-generated decoder.  */
 #include "decode.inc.c"
 
@@ -2007,7 +2019,7 @@ static void trans_break(DisasContext *ctx, arg_break *a, 
uint32_t insn)
 nullify_end(ctx);
 }
 
-static void trans_sync(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_sync(DisasContext *ctx, arg_sync *a, uint32_t insn)
 {
 /* No point in nullifying the memory barrier.  */
 tcg_gen_mb(TCG_BAR_SC | TCG_MO_ALL);
@@ -2187,20 +2199,18 @@ static void trans_mtsarcm(DisasContext *ctx, 
arg_mtsarcm *a, uint32_t insn)
 cond_free(&ctx->null_cond);
 }
 
-static void trans_ldsid(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_ldsid(DisasContext *ctx, arg_ldsid *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
+unsigned rt = a->t;
 TCGv_reg dest = dest_gpr(ctx, rt);
 
 #ifdef CONFIG_USER_ONLY
 /* We don't implement space registers in user mode. */
 tcg_gen_movi_reg(dest, 0);
 #else
-unsigned rb = extract32(insn, 21, 5);
-unsigned sp = extract32(insn, 14, 2);
 TCGv_i64 t0 = tcg_temp_new_i64();
 
-tcg_gen_mov_i64(t0, space_select(ctx, sp, load_gpr(ctx, rb)));
+tcg_gen_mov_i64(t0, space_select(ctx, a->sp, load_gpr(ctx, a->b)));
 tcg_gen_shri_i64(t0, t0, 32);
 tcg_gen_trunc_i64_reg(dest, t0);
 
@@ -2211,28 +2221,14 @@ static void trans_ldsid(DisasContext *ctx, uint32_t 
insn, const DisasInsn *di)
 cond_free(&ctx->null_cond);
 }
 
+static void trans_rsm(DisasContext *ctx, arg_rsm *a, uint32_t insn)
+{
+CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
 #ifndef CONFIG_USER_ONLY
-/* Note that ssm/rsm instructions number PSW_W and PSW_E differently.  */
-static target_ureg extract_sm_imm(uint32_t insn)
-{
-target_ureg val = extract32(insn, 16, 10);
-
-if (val & PSW_SM_E) {
-val = (val & ~PSW_SM_E) | PSW_E;
-}
-if (val & PSW_SM_W) {
-val = (val & ~PSW_SM_W) | PSW_W;
-}
-return val;
-}
-
-static void trans_rsm(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
-{
-unsigned rt = extract32(insn, 0, 5);
-target_ureg sm = extract_sm_imm(insn);
+unsigned rt = a->t;
+target_ureg sm = a->i;
 TCGv_reg tmp;
 
-CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
 nullify_over(ctx);
 
 tmp = get_temp(ctx);
@@ -2244,15 +2240,17 @@ static void trans_rsm(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 /* Exit the TB to recognize new interrupts, e.g. PSW_M.  */
 ctx->base.is_jmp = DISAS_IAQ_N_STALE_EXIT;
 nullify_end(ctx);
+#endif
 }
 
-static void trans_ssm(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void trans_ssm(DisasContext *ctx, arg_ssm *a, uint32_t insn)
 {
-unsigned rt = extract32(insn, 0, 5);
-target_ureg sm = extract_sm_imm(insn);
+CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
+unsigned rt = a->t;
+target_ureg sm = a->i;
 TCGv_reg tmp;
 
-CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
 nullify_over(ctx);
 
 tmp = get_temp(ctx);
@@ -2264,8 +2262,8 @@ static void trans_ssm(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 /* Exit the TB to recognize new interrupts, e.g. PSW_I.  */
 ctx->base.is_jmp = DISAS_IAQ_N_STALE_EXIT;
 nullify_end(ctx);
+#endif
 }
-#endif /* !CONFIG_USER_ONLY */
 
 static void trans_mtsm(DisasContext *ctx, arg_mtsm *a, uint32_t insn)
 {
@@ -2284,15 +2282,13 @@ static void trans_mtsm(DisasContext *ctx, arg_mtsm *a, 
uint32_t insn)
 #endif
 }
 
-#ifndef CONFIG_USER_ONLY
-static void trans_rfi(DisasContext *ctx, uint32_t insn, const DisasInsn *di)
+static void do_rfi(DisasContext *ctx, bool rfi_r)
 {
-unsigned comp = extract32(insn, 5, 4);
-
 CHECK_MOST_PRIVILEGED(EXCP_PRIV_OPR);
+#ifndef CONFIG_USER_ONLY
 nullify_over(ctx);
 
-if (comp == 5) {
+if (rfi_r) {
 gen_helper_rfi_r(cpu_env);
 } else {
 gen_helper_rfi(cpu_env);
@@ -2306,8 +2302,20 @@ static void trans_rfi(DisasContext *ctx, uint32_t insn, 
const DisasInsn *di)
 ctx->base.is_jmp = DISAS_NORETURN;
 
 nullify_end(ctx);
+#endif
 }
 
+static void trans_rfi(DisasContext *ctx, arg_rfi *a, uint32_t insn)
+{
+do_rfi(ctx, false);
+}
+
+static void trans_rfi_r(DisasContext *ctx, 

[Qemu-devel] [PATCH 00/19] target/hppa: Convert to decodetree.py

2018-02-17 Thread Richard Henderson
The existing hppa backend uses a lot of mask/compare tables
to do decoding beyond the major opcode.  Converting the port
to the autogenerator makes things a lot easier to read.


r~


Richard Henderson (19):
  target/hppa: Use DisasContextBase.is_jmp
  target/hppa: Begin using scripts/decodetree.py
  target/hppa: Convert move to/from system registers
  target/hppa: Convert remainder of system insns
  target/hppa: Unify specializations of OR
  target/hppa: Convert memory management insns
  target/hppa: Convert arithmetic/logical insns
  target/hppa: Convert indexed memory insns
  target/hppa: Convert fp multiply-add
  target/hppa: Convert conditional branches
  target/hppa: Convert shift, extract, deposit insns
  target/hppa: Convert direct and indirect branches
  target/hppa: Convert arithmetic immediate insns
  target/hppa: Convert offset memory insns
  target/hppa: Convert fp indexed memory insns
  target/hppa: Convert halt/reset insns
  target/hppa: Convert fp fused multiply-add insns
  target/hppa: Convert fp operate insns
  target/hppa: Merge translate_one into hppa_tr_translate_insn

 target/hppa/translate.c   | 3186 ++---
 target/hppa/Makefile.objs |8 +
 target/hppa/insns.decode  |  525 
 3 files changed, 1781 insertions(+), 1938 deletions(-)
 create mode 100644 target/hppa/insns.decode

-- 
2.14.3




[Qemu-devel] [PATCH 2/2] hw/mips/boston: Enable pch_gbe ethernet controller

2018-02-17 Thread Paul Burton
Enable CONFIG_PCH_GBE_PCI in mips64el-softmmu.mak (currently the only
default config to enable Boston board support) and create the pch_gbe
device when using the Boston board.

This provides the board with an ethernet controller matching that found
on real Boston boards as part of the Intel EG20T Platform Controller
Hub, and allows standard Boston Linux kernels to have network access.

This is most easily tested using the downstream linux-mti kernels at the
moment, until MIPS support for the Linux pch_gbe driver is upstream. For
example, presuming U-Boot's mkimage tool is present in your $PATH, this
should be sufficient to boot Linux & see it obtain an IP address using
the emulated pch_gbe device:

  $ git clone git://git.linux-mips.org/pub/scm/linux-mti.git -b eng
  $ cd linux-mti
  $ make ARCH=mips 64r6el_defconfig
  $ make ARCH=mips CROSS_COMPILE=/path/to/compiler/bin/mips-linux-gnu-
  $ qemu-system-mips64el \
  -M boston -cpu I6400 \
  -kernel arch/mips/boot/vmlinux.gz.itb \
  -serial stdio -append "ip=dhcp"

Signed-off-by: Paul Burton 
Cc: Aurelien Jarno 
Cc: Yongbok Kim 

---

 default-configs/mips64el-softmmu.mak | 1 +
 hw/mips/boston.c | 8 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/default-configs/mips64el-softmmu.mak 
b/default-configs/mips64el-softmmu.mak
index c2ae313f47..85175ea223 100644
--- a/default-configs/mips64el-softmmu.mak
+++ b/default-configs/mips64el-softmmu.mak
@@ -13,3 +13,4 @@ CONFIG_VT82C686=y
 CONFIG_MIPS_BOSTON=y
 CONFIG_FITLOADER=y
 CONFIG_PCI_XILINX=y
+CONFIG_PCH_GBE_PCI=y
diff --git a/hw/mips/boston.c b/hw/mips/boston.c
index fb23161b33..408977bca1 100644
--- a/hw/mips/boston.c
+++ b/hw/mips/boston.c
@@ -31,6 +31,7 @@
 #include "hw/mips/cps.h"
 #include "hw/mips/cpudevs.h"
 #include "hw/pci-host/xilinx-pcie.h"
+#include "net/net.h"
 #include "qapi/error.h"
 #include "qemu/cutils.h"
 #include "qemu/error-report.h"
@@ -430,7 +431,7 @@ static void boston_mach_init(MachineState *machine)
 MemoryRegion *flash, *ddr, *ddr_low_alias, *lcd, *platreg;
 MemoryRegion *sys_mem = get_system_memory();
 XilinxPCIEHost *pcie2;
-PCIDevice *ahci;
+PCIDevice *ahci, *eth;
 DriveInfo *hd[6];
 Chardev *chr;
 int fw_size, fit_err;
@@ -529,6 +530,11 @@ static void boston_mach_init(MachineState *machine)
 ide_drive_get(hd, ahci_get_num_ports(ahci));
 ahci_ide_create_devs(ahci, hd);
 
+eth = pci_create(&PCI_BRIDGE(&pcie2->root)->sec_bus,
+ PCI_DEVFN(0, 1), "pch_gbe");
+qdev_set_nic_properties(&eth->qdev, &nd_table[0]);
+qdev_init_nofail(&eth->qdev);
+
 if (machine->firmware) {
 fw_size = load_image_targphys(machine->firmware,
  0x1fc00000, 4 * M_BYTE);
-- 
2.16.1




[Qemu-devel] [PATCH v9 13/14] hw/arm/virt-acpi-build: Add smmuv3 node in IORT table

2018-02-17 Thread Eric Auger
From: Prem Mallappa 

This patch builds the smmuv3 node in the ACPI IORT table.

The RID space of the root complex, which spans 0x0-0x10000
maps to streamid space 0x0-0x10000 in smmuv3, which in turn
maps to deviceid space 0x0-0x10000 in the ITS group.
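
As background (a standalone sketch, not the QEMU code), a single IORT ID
mapping entry is just an offset rule over a contiguous range; with an
identity mapping covering the whole 16-bit RID space the output equals the
input, which is what the input_base/id_count/output_base triple used below
encodes (id_count being, as I read the IORT spec, the number of IDs minus
one). The struct and sample values here are illustrative only:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for one IORT ID mapping entry. */
    struct id_mapping {
        uint32_t input_base;   /* first input ID covered          */
        uint32_t id_count;     /* number of IDs in the range - 1  */
        uint32_t output_base;  /* first output ID produced        */
    };

    /* Identity map over the whole 0x0-0x10000 RID space, as in this patch. */
    static const struct id_mapping identity = { 0, 0xffff, 0 };

    static uint32_t map_id(const struct id_mapping *m, uint32_t in)
    {
        /* out = output_base + (in - input_base) */
        return m->output_base + (in - m->input_base);
    }

    int main(void)
    {
        uint32_t rid = 0x0008;                  /* requester ID for 00:01.0 */
        uint32_t sid = map_id(&identity, rid);  /* StreamID seen by the SMMU */
        printf("RID 0x%x -> StreamID 0x%x\n", (unsigned)rid, (unsigned)sid);
        return 0;
    }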

The guest must feature the IOMMU probe deferral series
(https://lkml.org/lkml/2017/4/10/214) which fixes streamid
multiple lookup. This bug is not related to the SMMU emulation.

Signed-off-by: Prem Mallappa 
Signed-off-by: Eric Auger 

---

v2 -> v3:
- integrate into the existing IORT table made up of ITS, RC nodes
- take into account vms->smmu
- match linux actbl2.h acpi_iort_smmu_v3 field names
---
 hw/arm/virt-acpi-build.c| 56 +++--
 include/hw/acpi/acpi-defs.h | 15 
 2 files changed, 64 insertions(+), 7 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index f7fa795..4b5ad91 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -393,19 +393,26 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, 
unsigned xsdt_tbl_offset)
 }
 
 static void
-build_iort(GArray *table_data, BIOSLinker *linker)
+build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
-int iort_start = table_data->len;
+int nb_nodes, iort_start = table_data->len;
 AcpiIortIdMapping *idmap;
 AcpiIortItsGroup *its;
 AcpiIortTable *iort;
-size_t node_size, iort_length;
+AcpiIortSmmu3 *smmu;
+size_t node_size, iort_length, smmu_offset = 0;
 AcpiIortRC *rc;
 
 iort = acpi_data_push(table_data, sizeof(*iort));
 
+if (vms->iommu) {
+nb_nodes = 3; /* RC, ITS, SMMUv3 */
+} else {
+nb_nodes = 2; /* RC, ITS */
+}
+
 iort_length = sizeof(*iort);
-iort->node_count = cpu_to_le32(2); /* RC and ITS nodes */
+iort->node_count = cpu_to_le32(nb_nodes);
 iort->node_offset = cpu_to_le32(sizeof(*iort));
 
 /* ITS group node */
@@ -418,6 +425,35 @@ build_iort(GArray *table_data, BIOSLinker *linker)
 its->its_count = cpu_to_le32(1);
 its->identifiers[0] = 0; /* MADT translation_id */
 
+if (vms->iommu == VIRT_IOMMU_SMMUV3) {
+int irq =  vms->irqmap[VIRT_SMMU];
+
+/* SMMUv3 node */
+smmu_offset = cpu_to_le32(iort->node_offset + node_size);
+node_size = sizeof(*smmu) + sizeof(*idmap);
+iort_length += node_size;
+smmu = acpi_data_push(table_data, node_size);
+
+
+smmu->type = ACPI_IORT_NODE_SMMU_V3;
+smmu->length = cpu_to_le16(node_size);
+smmu->mapping_count = cpu_to_le32(1);
+smmu->mapping_offset = cpu_to_le32(sizeof(*smmu));
+smmu->base_address = cpu_to_le64(vms->memmap[VIRT_SMMU].base);
+smmu->event_gsiv = cpu_to_le32(irq);
+smmu->pri_gsiv = cpu_to_le32(irq + 1);
+smmu->gerr_gsiv = cpu_to_le32(irq + 2);
+smmu->sync_gsiv = cpu_to_le32(irq + 3);
+
+/* Identity RID mapping covering the whole input RID range */
+idmap = &smmu->id_mapping_array[0];
+idmap->input_base = 0;
+idmap->id_count = cpu_to_le32(0xFFFF);
+idmap->output_base = 0;
+/* output IORT node is the ITS group node (the first node) */
+idmap->output_reference = cpu_to_le32(iort->node_offset);
+}
+
 /* Root Complex Node */
 node_size = sizeof(*rc) + sizeof(*idmap);
 iort_length += node_size;
@@ -438,8 +474,14 @@ build_iort(GArray *table_data, BIOSLinker *linker)
 idmap->input_base = 0;
 idmap->id_count = cpu_to_le32(0xFFFF);
 idmap->output_base = 0;
-/* output IORT node is the ITS group node (the first node) */
-idmap->output_reference = cpu_to_le32(iort->node_offset);
+
+if (vms->iommu) {
+/* output IORT node is the smmuv3 node */
+idmap->output_reference = cpu_to_le32(smmu_offset);
+} else {
+/* output IORT node is the ITS group node (the first node) */
+idmap->output_reference = cpu_to_le32(iort->node_offset);
+}
 
 iort->length = cpu_to_le32(iort_length);
 
@@ -786,7 +828,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables 
*tables)
 
 if (its_class_name() && !vmc->no_its) {
 acpi_add_table(table_offsets, tables_blob);
-build_iort(tables_blob, tables->linker);
+build_iort(tables_blob, tables->linker, vms);
 }
 
 /* XSDT is pointed to by RSDP */
diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index 80c8099..068ce28 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -700,6 +700,21 @@ struct AcpiIortItsGroup {
 } QEMU_PACKED;
 typedef struct AcpiIortItsGroup AcpiIortItsGroup;
 
+struct AcpiIortSmmu3 {
+ACPI_IORT_NODE_HEADER_DEF
+uint64_t base_address;
+uint32_t flags;
+uint32_t reserved2;
+uint64_t vatos_address;
+uint32_t model;
+uint32_t event_gsiv;
+uint32_t pri_gsiv;
+

[Qemu-devel] [PATCH v9 12/14] hw/arm/virt: Add SMMUv3 to the virt board

2018-02-17 Thread Eric Auger
From: Prem Mallappa 

Add code to instantiate an SMMUv3 in the virt machine. A new "iommu"
integer member is introduced in VirtMachineState to store the type
of the IOMMU in use.

Signed-off-by: Prem Mallappa 
Signed-off-by: Eric Auger 

---
v7 -> v8:
- integer iommu member
- add primary-bus property

v4 -> v5:
- add dma-coherent property

v2 -> v3:
- vbi was removed. Use vms instead
- migrate to new smmu binding format (iommu-map)
- don't use appendprop anymore
- add vms->smmu and guard instantiation with this latter
- interrupts type changed to edge

Conflicts:
hw/arm/smmuv3.c
---
 hw/arm/virt.c | 64 ++-
 include/hw/arm/virt.h | 10 
 2 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index dbb3c80..e9dca0d 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -58,6 +58,7 @@
 #include "hw/smbios/smbios.h"
 #include "qapi/visitor.h"
 #include "standard-headers/linux/input.h"
+#include "hw/arm/smmuv3.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
 static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
@@ -141,6 +142,7 @@ static const MemMapEntry a15memmap[] = {
 [VIRT_FW_CFG] = { 0x09020000, 0x00000018 },
 [VIRT_GPIO] =   { 0x09030000, 0x00001000 },
 [VIRT_SECURE_UART] ={ 0x09040000, 0x00001000 },
+[VIRT_SMMU] =   { 0x09050000, 0x00020000 }, /* 128K, needed */
 [VIRT_MMIO] =   { 0x0a000000, 0x00000200 },
 /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
 [VIRT_PLATFORM_BUS] =   { 0x0c000000, 0x02000000 },
@@ -161,6 +163,7 @@ static const int a15irqmap[] = {
 [VIRT_SECURE_UART] = 8,
 [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
 [VIRT_GIC_V2M] = 48, /* ...to 48 + NUM_GICV2M_SPIS - 1 */
+[VIRT_SMMU] = 74,/* ...to 74 + NUM_SMMU_IRQS - 1 */
 [VIRT_PLATFORM_BUS] = 112, /* ...to 112 + PLATFORM_BUS_NUM_IRQS -1 */
 };
 
@@ -941,7 +944,57 @@ static void create_pcie_irq_map(const VirtMachineState 
*vms,
0x7   /* PCI irq */);
 }
 
-static void create_pcie(const VirtMachineState *vms, qemu_irq *pic)
+static void create_smmu(const VirtMachineState *vms, qemu_irq *pic,
+PCIBus *bus)
+{
+char *node;
+const char compat[] = "arm,smmu-v3";
+int irq =  vms->irqmap[VIRT_SMMU];
+int i;
+hwaddr base = vms->memmap[VIRT_SMMU].base;
+hwaddr size = vms->memmap[VIRT_SMMU].size;
+const char irq_names[] = "eventq\0priq\0cmdq-sync\0gerror";
+DeviceState *dev;
+
+if (vms->iommu != VIRT_IOMMU_SMMUV3 || !vms->iommu_phandle) {
+return;
+}
+
+dev = qdev_create(NULL, "arm-smmuv3");
+
+object_property_set_link(OBJECT(dev), OBJECT(bus), "primary-bus",
+ &error_abort);
+qdev_init_nofail(dev);
+sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
+for (i = 0; i < NUM_SMMU_IRQS; i++) {
+sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, pic[irq + i]);
+}
+
+node = g_strdup_printf("/smmuv3@%" PRIx64, base);
+qemu_fdt_add_subnode(vms->fdt, node);
+qemu_fdt_setprop(vms->fdt, node, "compatible", compat, sizeof(compat));
+qemu_fdt_setprop_sized_cells(vms->fdt, node, "reg", 2, base, 2, size);
+
+qemu_fdt_setprop_cells(vms->fdt, node, "interrupts",
+GIC_FDT_IRQ_TYPE_SPI, irq, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
+GIC_FDT_IRQ_TYPE_SPI, irq + 1, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
+GIC_FDT_IRQ_TYPE_SPI, irq + 2, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
+GIC_FDT_IRQ_TYPE_SPI, irq + 3, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI);
+
+qemu_fdt_setprop(vms->fdt, node, "interrupt-names", irq_names,
+ sizeof(irq_names));
+
+qemu_fdt_setprop_cell(vms->fdt, node, "clocks", vms->clock_phandle);
+qemu_fdt_setprop_string(vms->fdt, node, "clock-names", "apb_pclk");
+qemu_fdt_setprop(vms->fdt, node, "dma-coherent", NULL, 0);
+
+qemu_fdt_setprop_cell(vms->fdt, node, "#iommu-cells", 1);
+
+qemu_fdt_setprop_cell(vms->fdt, node, "phandle", vms->iommu_phandle);
+g_free(node);
+}
+
+static void create_pcie(VirtMachineState *vms, qemu_irq *pic)
 {
 hwaddr base_mmio = vms->memmap[VIRT_PCIE_MMIO].base;
 hwaddr size_mmio = vms->memmap[VIRT_PCIE_MMIO].size;
@@ -1054,6 +1107,15 @@ static void create_pcie(const VirtMachineState *vms, 
qemu_irq *pic)
 qemu_fdt_setprop_cell(vms->fdt, nodename, "#interrupt-cells", 1);
 create_pcie_irq_map(vms, vms->gic_phandle, irq, nodename);
 
+if (vms->iommu) {
+vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
+
+create_smmu(vms, pic, pci->bus);
+
+qemu_fdt_setprop_cells(vms->fdt, nodename, "iommu-map",
+   0x0, vms->iommu_phandle, 0x0, 0x1);
+}
+
 g_free(nodename);
 }
 

[Qemu-devel] [PATCH v9 09/14] hw/arm/smmuv3: Implement translate callback

2018-02-17 Thread Eric Auger
This patch implements the IOMMU Memory Region translate()
callback. Most of the code relates to decoding and checking the
translation configuration (STE, CD).
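
As background (a standalone sketch, not this series' code), the STE
"Config" field is what steers the decode toward abort, bypass or stage-1
translation; the bit tests below mirror the STE_CFG_* helpers added in
smmuv3-internal.h, but the sample value and messages are made up:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Plain-C equivalent of extract32(word, start, length), length < 32. */
    static uint32_t extract_bits(uint32_t word, int start, int length)
    {
        return (word >> start) & ((1u << length) - 1);
    }

    int main(void)
    {
        uint32_t ste_word0 = 0x9;              /* hypothetical STE word[0] */
        bool valid   = extract_bits(ste_word0, 0, 1);
        uint32_t cfg = extract_bits(ste_word0, 1, 3);

        if (!valid) {
            printf("invalid STE -> record an event\n");
        } else if (!(cfg & 0x4)) {
            printf("config abort -> transaction terminated\n");
        } else if (cfg == 0x4) {
            printf("bypass -> output address = input address\n");
        } else if (cfg & 0x1) {
            printf("stage 1 enabled -> fetch the CD and walk the tables\n");
        }
        return 0;
    }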

Signed-off-by: Eric Auger 

---
v8 -> v9:
- use SMMU_EVENT_STRING macro
- get rid of last error_report's
- decode asid
- handle config abort before ptw
- add 64-bit single-copy atomic comment

v7 -> v8:
- use address_space_rw
- s/Ste/STE, s/Cd/CD
- use dma_memory_read
- remove everything related to stage 2
- collect data for both TTx
- renamings
- pass the event handle all along the config decoding path
- decode tbi, ars
---
 hw/arm/smmuv3-internal.h | 146 
 hw/arm/smmuv3.c  | 341 +++
 hw/arm/trace-events  |   9 ++
 3 files changed, 496 insertions(+)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 3929f69..b203426 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -462,4 +462,150 @@ typedef struct SMMUEventInfo {
 
 void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
 
+/* Configuration Data */
+
+/* STE Level 1 Descriptor */
+typedef struct STEDesc {
+uint32_t word[2];
+} STEDesc;
+
+/* CD Level 1 Descriptor */
+typedef struct CDDesc {
+uint32_t word[2];
+} CDDesc;
+
+/* Stream Table Entry(STE) */
+typedef struct STE {
+uint32_t word[16];
+} STE;
+
+/* Context Descriptor(CD) */
+typedef struct CD {
+uint32_t word[16];
+} CD;
+
+/* STE fields */
+
+#define STE_VALID(x)   extract32((x)->word[0], 0, 1) /* 0 */
+
+#define STE_CONFIG(x)  extract32((x)->word[0], 1, 3)
+#define STE_CFG_S1_ENABLED(config) (config & 0x1)
+#define STE_CFG_S2_ENABLED(config) (config & 0x2)
+#define STE_CFG_ABORT(config)  (!(config & 0x4))
+#define STE_CFG_BYPASS(config) (config == 0x4)
+
+#define STE_S1FMT(x)   extract32((x)->word[0], 4 , 2)
+#define STE_S1CDMAX(x) extract32((x)->word[1], 27, 5)
+#define STE_EATS(x)extract32((x)->word[2], 28, 2)
+#define STE_STRW(x)extract32((x)->word[2], 30, 2)
+#define STE_S2VMID(x)  extract32((x)->word[4], 0 , 16)
+#define STE_S2T0SZ(x)  extract32((x)->word[5], 0 , 6)
+#define STE_S2SL0(x)   extract32((x)->word[5], 6 , 2)
+#define STE_S2TG(x)extract32((x)->word[5], 14, 2)
+#define STE_S2PS(x)extract32((x)->word[5], 16, 3)
+#define STE_S2AA64(x)  extract32((x)->word[5], 19, 1)
+#define STE_S2HD(x)extract32((x)->word[5], 24, 1)
+#define STE_S2HA(x)extract32((x)->word[5], 25, 1)
+#define STE_S2S(x) extract32((x)->word[5], 26, 1)
+#define STE_CTXPTR(x)   \
+({  \
+unsigned long addr; \
+addr = (uint64_t)extract32((x)->word[1], 0, 16) << 32;  \
+addr |= (uint64_t)((x)->word[0] & 0xffffffc0);  \
+addr;   \
+})
+
+#define STE_S2TTB(x)\
+({  \
+unsigned long addr; \
+addr = (uint64_t)extract32((x)->word[7], 0, 16) << 32;  \
+addr |= (uint64_t)((x)->word[6] & 0xfffffff0);  \
+addr;   \
+})
+
+static inline int oas2bits(int oas_field)
+{
+switch (oas_field) {
+case 0b011:
+return 42;
+case 0b100:
+return 44;
+default:
+return 32 + (1 << oas_field);
+   }
+}
+
+static inline int pa_range(STE *ste)
+{
+int oas_field = MIN(STE_S2PS(ste), SMMU_IDR5_OAS);
+
+if (!STE_S2AA64(ste)) {
+return 40;
+}
+
+return oas2bits(oas_field);
+}
+
+#define MAX_PA(ste) ((1 << pa_range(ste)) - 1)
+
+/* CD fields */
+
+#define CD_VALID(x)   extract32((x)->word[0], 30, 1)
+#define CD_ASID(x)extract32((x)->word[1], 16, 16)
+#define CD_TTB(x, sel)  \
+({  \
+uint64_t hi, lo;\
+hi = extract32((x)->word[(sel) * 2 + 3], 0, 16);\
+hi <<= 32;  \
+lo = (x)->word[(sel) * 2 + 2] & ~0xf;   \
+hi | lo;\
+})
+
+#define CD_TSZ(x, sel)   extract32((x)->word[0], (16 * (sel)) + 0, 6)
+#define CD_TG(x, sel)extract32((x)->word[0], (16 * (sel)) + 6, 2)
+#define CD_EPD(x, sel)   extract32((x)->word[0], (16 * (sel)) + 14, 1)
+#define CD_ENDI(x)   extract32((x)->word[0], 15, 1)
+#define CD_IPS(x)extract32((x)->word[1], 0 , 3)
+#define CD_TBI(x)extract32((x)->word[1], 6 , 2)
+#define CD_S(x)  extract32((x)->word[1], 12, 1)
+#define CD_R(x)  extract32((x)->word[1], 13, 1)
+#define CD_A(x)  extract32((x)->word[1], 14, 1)
+#define CD_AARCH64(x)

[Qemu-devel] [PATCH v9 11/14] target/arm/kvm: Translate the MSI doorbell in kvm_arch_fixup_msi_route

2018-02-17 Thread Eric Auger
In case the MSI is translated by an IOMMU, we need to fix up the
MSI route with the translated address.
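
As a reminder of what the fixup amounts to (a standalone sketch, not the
kvm code): the translate() callback returns a guest physical doorbell
address, which is then split into the 32-bit address_lo/address_hi halves
of the MSI route entry. The address below is made up:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t translated_addr = 0x80802000ULL;  /* hypothetical GPA */

        uint32_t address_lo = (uint32_t)translated_addr;
        uint32_t address_hi = (uint32_t)(translated_addr >> 32);

        printf("lo=0x%x hi=0x%x\n", address_lo, address_hi);
        return 0;
    }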

Signed-off-by: Eric Auger 

---

v5 -> v6:
- use IOMMUMemoryRegionClass API

It is still unclear to me if we need to register an IOMMUNotifier
to handle any change in the MSI doorbell which would occur behind
the scenes and would not lead to any call to kvm_arch_fixup_msi_route().
---
 target/arm/kvm.c| 27 +++
 target/arm/trace-events |  3 +++
 2 files changed, 30 insertions(+)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index 1219d00..9f5976a 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -20,8 +20,13 @@
 #include "sysemu/kvm.h"
 #include "kvm_arm.h"
 #include "cpu.h"
+#include "trace.h"
 #include "internals.h"
 #include "hw/arm/arm.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/msi.h"
+#include "hw/arm/smmu-common.h"
+#include "hw/arm/smmuv3.h"
 #include "exec/memattrs.h"
 #include "exec/address-spaces.h"
 #include "hw/boards.h"
@@ -666,6 +671,28 @@ int kvm_arm_vgic_probe(void)
 int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
  uint64_t address, uint32_t data, PCIDevice *dev)
 {
+AddressSpace *as = pci_device_iommu_address_space(dev);
+IOMMUMemoryRegionClass *imrc;
+IOMMUTLBEntry entry;
+SMMUDevice *sdev;
+
+if (as == &address_space_memory) {
+return 0;
+}
+
+/* MSI doorbell address is translated by an IOMMU */
+sdev = container_of(as, SMMUDevice, as);
+imrc = IOMMU_MEMORY_REGION_GET_CLASS(&sdev->iommu);
+
+entry = imrc->translate(&sdev->iommu, address, IOMMU_WO);
+
+route->u.msi.address_lo = entry.translated_addr;
+route->u.msi.address_hi = entry.translated_addr >> 32;
+
+trace_kvm_arm_fixup_msi_route(address, sdev->devfn,
+  sdev->iommu.parent_obj.name,
+  entry.translated_addr);
+
 return 0;
 }
 
diff --git a/target/arm/trace-events b/target/arm/trace-events
index 9e37131..8b3c220 100644
--- a/target/arm/trace-events
+++ b/target/arm/trace-events
@@ -8,3 +8,6 @@ arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: 
timer %d value 0x%"
 arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" 
PRIx64
 arm_gt_imask_toggle(int timer, int irqstate) "gt_ctl_write: timer %d IMASK 
toggle, new irqstate %d"
 arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64
+
+# target/arm/kvm.c
+kvm_arm_fixup_msi_route(uint64_t iova, uint32_t devid, const char *name, 
uint64_t gpa) "MSI addr = 0x%"PRIx64" is translated for devfn=%d through %s 
into 0x%"PRIx64
-- 
2.5.5




[Qemu-devel] [PATCH v9 06/14] hw/arm/smmuv3: Queue helpers

2018-02-17 Thread Eric Auger
We introduce helpers to read from and write to the command and event
circular queues.

smmuv3_write_eventq and smmuv3_cmdq_consume will become static
in subsequent patches.

Invalidation commands are not yet dealt with. We do not cache
data that needs to be invalidated. This will change with vhost
integration.
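
For readers new to the SMMUv3 queue model, here is a standalone sketch of
the index convention the helpers below rely on (my reading of the spec,
not the patch code): each index carries a wrap bit just above the log2size
bits; the queue is empty when producer and consumer match exactly, and full
when the indices match but the wrap bits differ:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LOG2SIZE  8                     /* hypothetical 256-entry queue */
    #define IDX_MASK  ((1u << LOG2SIZE) - 1)
    #define WRAP_BIT  (1u << LOG2SIZE)

    static bool queue_empty(uint32_t prod, uint32_t cons)
    {
        return prod == cons;                /* same index, same wrap bit */
    }

    static bool queue_full(uint32_t prod, uint32_t cons)
    {
        return ((prod ^ cons) & IDX_MASK) == 0 &&  /* same index ...         */
               ((prod ^ cons) & WRAP_BIT) != 0;    /* ... different wrap bit */
    }

    /* Advance an index; the wrap bit toggles when the index rolls over. */
    static uint32_t queue_incr(uint32_t i)
    {
        return (i + 1) & (IDX_MASK | WRAP_BIT);
    }

    int main(void)
    {
        uint32_t prod = WRAP_BIT | 0, cons = 0;  /* producer lapped consumer */
        printf("full=%d empty=%d\n", queue_full(prod, cons),
               queue_empty(prod, cons));
        printf("next cons=0x%x\n", (unsigned)queue_incr(cons));
        return 0;
    }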

Signed-off-by: Eric Auger 

---

v8 -> v9:
- fix CMD_SSID & CMD_ADDR + some renamings
- do cons increment after the execution of the command
- add Q_INCONSISTENT()

v7 -> v8
- use address_space_rw
- helpers inspired from spec
---
 hw/arm/smmuv3-internal.h | 150 +++
 hw/arm/smmuv3.c  | 162 +++
 hw/arm/trace-events  |   4 ++
 3 files changed, 316 insertions(+)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 40b39a1..c0771ce 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -162,4 +162,154 @@ static inline uint64_t smmu_read64(uint64_t r, unsigned 
offset,
 void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
 void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
 
+/* Queue Handling */
+
+#define LOG2SIZE(q)extract64((q)->base, 0, 5)
+#define BASE(q)((q)->base & SMMU_BASE_ADDR_MASK)
+#define WRAP_MASK(q)   (1 << LOG2SIZE(q))
+#define INDEX_MASK(q)  ((1 << LOG2SIZE(q)) - 1)
+#define WRAP_INDEX_MASK(q) ((1 << (LOG2SIZE(q) + 1)) - 1)
+
+#define Q_CONS_ENTRY(q)  (BASE(q) + \
+  (q)->entry_size * ((q)->cons & INDEX_MASK(q)))
+#define Q_PROD_ENTRY(q)  (BASE(q) + \
+  (q)->entry_size * ((q)->prod & INDEX_MASK(q)))
+
+#define Q_CONS(q) ((q)->cons & INDEX_MASK(q))
+#define Q_PROD(q) ((q)->prod & INDEX_MASK(q))
+
+#define Q_CONS_WRAP(q) (((q)->cons & WRAP_MASK(q)) >> LOG2SIZE(q))
+#define Q_PROD_WRAP(q) (((q)->prod & WRAP_MASK(q)) >> LOG2SIZE(q))
+
+#define Q_FULL(q) \
+    (((((q)->cons) & INDEX_MASK(q)) == \
+      (((q)->prod) & INDEX_MASK(q))) && \
+     ((((q)->cons) & WRAP_MASK(q)) != \
+      (((q)->prod) & WRAP_MASK(q))))
+
+#define Q_EMPTY(q) \
+    (((((q)->cons) & INDEX_MASK(q)) == \
+      (((q)->prod) & INDEX_MASK(q))) && \
+     ((((q)->cons) & WRAP_MASK(q)) == \
+      (((q)->prod) & WRAP_MASK(q))))
+
+#define Q_INCONSISTENT(q) \
+    ((((((q)->prod) & INDEX_MASK(q)) > (((q)->cons) & INDEX_MASK(q))) && \
+      ((((q)->prod) & WRAP_MASK(q)) != (((q)->cons) & WRAP_MASK(q)))) || \
+     (((((q)->prod) & INDEX_MASK(q)) < (((q)->cons) & INDEX_MASK(q))) && \
+      ((((q)->prod) & WRAP_MASK(q)) == (((q)->cons) & WRAP_MASK(q))))) \
+
+#define SMMUV3_CMDQ_ENABLED(s) \
+ (FIELD_EX32(s->cr[0], CR0, CMDQEN))
+
+#define SMMUV3_EVENTQ_ENABLED(s) \
+ (FIELD_EX32(s->cr[0], CR0, EVENTQEN))
+
+static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
+{
+s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
+}
+
+void smmuv3_write_eventq(SMMUv3State *s, Evt *evt);
+
+/* Commands */
+
+enum {
+SMMU_CMD_PREFETCH_CONFIG = 0x01,
+SMMU_CMD_PREFETCH_ADDR,
+SMMU_CMD_CFGI_STE,
+SMMU_CMD_CFGI_STE_RANGE,
+SMMU_CMD_CFGI_CD,
+SMMU_CMD_CFGI_CD_ALL,
+SMMU_CMD_CFGI_ALL,
+SMMU_CMD_TLBI_NH_ALL = 0x10,
+SMMU_CMD_TLBI_NH_ASID,
+SMMU_CMD_TLBI_NH_VA,
+SMMU_CMD_TLBI_NH_VAA,
+SMMU_CMD_TLBI_EL3_ALL= 0x18,
+SMMU_CMD_TLBI_EL3_VA = 0x1a,
+SMMU_CMD_TLBI_EL2_ALL= 0x20,
+SMMU_CMD_TLBI_EL2_ASID,
+SMMU_CMD_TLBI_EL2_VA,
+SMMU_CMD_TLBI_EL2_VAA,  /* 0x23 */
+SMMU_CMD_TLBI_S12_VMALL  = 0x28,
+SMMU_CMD_TLBI_S2_IPA = 0x2a,
+SMMU_CMD_TLBI_NSNH_ALL   = 0x30,
+SMMU_CMD_ATC_INV = 0x40,
+SMMU_CMD_PRI_RESP,
+SMMU_CMD_RESUME  = 0x44,
+SMMU_CMD_STALL_TERM,
+SMMU_CMD_SYNC,  /* 0x46 */
+};
+
+static const char *cmd_stringify[] = {
+[SMMU_CMD_PREFETCH_CONFIG] = "SMMU_CMD_PREFETCH_CONFIG",
+[SMMU_CMD_PREFETCH_ADDR]   = "SMMU_CMD_PREFETCH_ADDR",
+[SMMU_CMD_CFGI_STE]= "SMMU_CMD_CFGI_STE",
+[SMMU_CMD_CFGI_STE_RANGE]  = "SMMU_CMD_CFGI_STE_RANGE",
+[SMMU_CMD_CFGI_CD] = "SMMU_CMD_CFGI_CD",
+[SMMU_CMD_CFGI_CD_ALL] = "SMMU_CMD_CFGI_CD_ALL",
+[SMMU_CMD_CFGI_ALL]= "SMMU_CMD_CFGI_ALL",
+[SMMU_CMD_TLBI_NH_ALL] = "SMMU_CMD_TLBI_NH_ALL",
+[SMMU_CMD_TLBI_NH_ASID]= "SMMU_CMD_TLBI_NH_ASID",
+[SMMU_CMD_TLBI_NH_VA]  = "SMMU_CMD_TLBI_NH_VA",
+[SMMU_CMD_TLBI_NH_VAA] = "SMMU_CMD_TLBI_NH_VAA",
+[SMMU_CMD_TLBI_EL3_ALL]= "SMMU_CMD_TLBI_EL3_ALL",
+[SMMU_CMD_TLBI_EL3_VA] = "SMMU_CMD_TLBI_EL3_VA",
+[SMMU_CMD_TLBI_EL2_ALL]= "SMMU_CMD_TLBI_EL2_ALL",
+[SMMU_CMD_TLBI_EL2_ASID]   = "SMMU_CMD_TLBI_EL2_ASID",
+[SMMU_CMD_TLBI_EL2_VA] = "SMMU_CMD_TLBI_EL2_VA",
+[SMMU_CMD_TLBI_EL2_VAA]= "SMMU_CMD_TLBI_EL2_VAA",
+[SMMU_CMD_TLBI_S12_VMALL]  = "SMMU_CMD_TLBI_S12_VMALL",
+[SMMU_CMD_TLBI_S2_IPA] = "SMMU_CMD_TLBI_S2_IPA",
+

[Qemu-devel] [PATCH v9 07/14] hw/arm/smmuv3: Implement MMIO write operations

2018-02-17 Thread Eric Auger
Now that we have the relevant helpers for queue and IRQ
management, let's implement the MMIO write operations.
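
One detail worth spelling out (a standalone sketch, not the patch's
smmu_write64): 64-bit registers such as STRTAB_BASE may be programmed
either with a single 64-bit access or with two 32-bit accesses to the low
and high halves, so the write path merges 32-bit writes into the backing
64-bit value:

    #include <stdint.h>
    #include <stdio.h>

    /* Merge a 4- or 8-byte write at byte offset 0 or 4 into a 64-bit reg. */
    static void write_reg64(uint64_t *reg, unsigned offset, unsigned size,
                            uint64_t val)
    {
        if (size == 8 && offset == 0) {
            *reg = val;                                /* full 64-bit store */
        } else if (size == 4 && (offset == 0 || offset == 4)) {
            unsigned shift = offset * 8;
            uint64_t mask = 0xffffffffULL << shift;
            *reg = (*reg & ~mask) | ((val & 0xffffffffULL) << shift);
        } else {
            fprintf(stderr, "bad offset/size %u/%u\n", offset, size);
        }
    }

    int main(void)
    {
        uint64_t strtab_base = 0;
        write_reg64(&strtab_base, 0, 4, 0x89ab0000);  /* low half first    */
        write_reg64(&strtab_base, 4, 4, 0x00000001);  /* ...then the high  */
        printf("STRTAB_BASE = 0x%llx\n", (unsigned long long)strtab_base);
        return 0;
    }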

Signed-off-by: Eric Auger 

---

v7 -> v8:
- state in the commit message that invalidation commands
  are not yet handled.
- use new queue helpers
- do not decode unhandled commands at this stage
---
 hw/arm/smmuv3-internal.h |  24 +++---
 hw/arm/smmuv3.c  | 111 +--
 hw/arm/trace-events  |   6 +++
 3 files changed, 132 insertions(+), 9 deletions(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index c0771ce..5af97ae 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -152,6 +152,25 @@ static inline uint64_t smmu_read64(uint64_t r, unsigned 
offset,
 return extract64(r, offset << 3, 32);
 }
 
+static inline void smmu_write64(uint64_t *r, unsigned offset,
+unsigned size, uint64_t value)
+{
+if (size == 8 && !offset) {
+*r  = value;
+}
+
+/* 32 bit access */
+
+if (offset && offset != 4)  {
+qemu_log_mask(LOG_GUEST_ERROR,
+  "SMMUv3 MMIO write: bad offset/size %u/%u\n",
+  offset, size);
+return ;
+}
+
+*r = deposit64(*r, offset << 3, 32, value);
+}
+
 /* Interrupts */
 
 #define smmuv3_eventq_irq_enabled(s)   \
@@ -159,9 +178,6 @@ static inline uint64_t smmu_read64(uint64_t r, unsigned 
offset,
 #define smmuv3_gerror_irq_enabled(s)  \
 (FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN))
 
-void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
-void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
-
 /* Queue Handling */
 
 #define LOG2SIZE(q)extract64((q)->base, 0, 5)
@@ -310,6 +326,4 @@ enum { /* Command completion notification */
 addr; \
 })
 
-int smmuv3_cmdq_consume(SMMUv3State *s);
-
 #endif
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 0b57215..fcfdbb0 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -37,7 +37,8 @@
  * @irq: irq type
  * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
  */
-void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask)
+static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
+   uint32_t gerror_mask)
 {
 
 bool pulse = false;
@@ -75,7 +76,7 @@ void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t 
gerror_mask)
 }
 }
 
-void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
+static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
 {
 uint32_t pending = s->gerror ^ s->gerrorn;
 uint32_t toggled = s->gerrorn ^ new_gerrorn;
@@ -199,7 +200,7 @@ static void smmuv3_init_regs(SMMUv3State *s)
 s->sid_split = 0;
 }
 
-int smmuv3_cmdq_consume(SMMUv3State *s)
+static int smmuv3_cmdq_consume(SMMUv3State *s)
 {
 SMMUCmdError cmd_error = SMMU_CERROR_NONE;
+SMMUQueue *q = &s->cmdq;
@@ -298,7 +299,109 @@ int smmuv3_cmdq_consume(SMMUv3State *s)
 static void smmu_write_mmio(void *opaque, hwaddr addr,
 uint64_t val, unsigned size)
 {
-/* not yet implemented */
+SMMUState *sys = opaque;
+SMMUv3State *s = ARM_SMMUV3(sys);
+
+/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
+addr &= ~0x10000;
+
+if (size != 4 && size != 8) {
+qemu_log_mask(LOG_GUEST_ERROR,
+  "SMMUv3 MMIO write: bad size %u\n", size);
+}
+
+trace_smmuv3_write_mmio(addr, val, size);
+
+switch (addr) {
+case A_CR0:
+s->cr[0] = val;
+s->cr0ack = val;
+/* in case the command queue has been enabled */
+smmuv3_cmdq_consume(s);
+return;
+case A_CR1:
+s->cr[1] = val;
+return;
+case A_CR2:
+s->cr[2] = val;
+return;
+case A_IRQ_CTRL:
+s->irq_ctrl = val;
+return;
+case A_GERRORN:
+smmuv3_write_gerrorn(s, val);
+/*
+ * By acknowledging the CMDQ_ERR, SW may notify cmds can
+ * be processed again
+ */
+smmuv3_cmdq_consume(s);
+return;
+case A_GERROR_IRQ_CFG0: /* 64b */
+smmu_write64(&s->gerror_irq_cfg0, 0, size, val);
+return;
+case A_GERROR_IRQ_CFG0 + 4:
+smmu_write64(&s->gerror_irq_cfg0, 4, size, val);
+return;
+case A_GERROR_IRQ_CFG1:
+s->gerror_irq_cfg1 = val;
+return;
+case A_GERROR_IRQ_CFG2:
+s->gerror_irq_cfg2 = val;
+return;
+case A_STRTAB_BASE: /* 64b */
+smmu_write64(&s->strtab_base, 0, size, val);
+return;
+case A_STRTAB_BASE + 4:
+smmu_write64(&s->strtab_base, 4, size, val);
+return;
+case A_STRTAB_BASE_CFG:
+s->strtab_base_cfg = val;
+if (FIELD_EX32(val, STRTAB_BASE_CFG, FMT) == 1) {
+s->sid_split = 

[Qemu-devel] [PATCH v9 04/14] hw/arm/smmuv3: Skeleton

2018-02-17 Thread Eric Auger
From: Prem Mallappa 

This patch implements a skeleton for the smmuv3 device.
Datatypes and register definitions are introduced. The MMIO
region, the interrupts and the queue are initialized.

Only the MMIO read operation is implemented here.
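
For orientation (a standalone sketch, not this patch's smmu_read_mmio),
the read path at this stage is essentially a switch over the register
offset that returns either a fixed ID value or stored state; the offsets
and values below are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    enum { REG_IDR0 = 0x0, REG_IDR1 = 0x4, REG_CR0 = 0x20, REG_CR0ACK = 0x24 };

    struct dev_state {
        uint32_t idr[2];
        uint32_t cr0;
        uint32_t cr0ack;
    };

    static uint64_t mmio_read(struct dev_state *s, uint64_t offset)
    {
        switch (offset) {
        case REG_IDR0:   return s->idr[0];    /* read-only ID register */
        case REG_IDR1:   return s->idr[1];
        case REG_CR0:    return s->cr0;
        case REG_CR0ACK: return s->cr0ack;
        default:
            fprintf(stderr, "unhandled read at 0x%llx\n",
                    (unsigned long long)offset);
            return 0;
        }
    }

    int main(void)
    {
        struct dev_state s = { .idr = { 0x1, 0x10 } };
        printf("IDR0 = 0x%llx\n",
               (unsigned long long)mmio_read(&s, REG_IDR0));
        return 0;
    }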

Signed-off-by: Prem Mallappa 
Signed-off-by: Eric Auger 

---
v8 -> v9:
- add #include "qemu/log.h"
- add parent_reset

v7 -> v8:
- remove __smmu_data structs
- revisit struct SMMUQueue
- do not advertise stage 2 support anymore
- use the register definition API and get rid of REG array
- get rid of queue structs

v6 -> v7:
- split into several patches

v5 -> v6:
- Use IOMMUMemoryregion
- regs become uint32_t and fix 64b MMIO access (.impl)
- trace_smmuv3_write/read_mmio take the size param

v4 -> v5:
- change smmuv3_translate proto (IOMMUAccessFlags flag)
- has_stagex replaced by is_ste_stagex
- smmu_cfg_populate removed
- added smmuv3_decode_config and reworked error management
- rework the naming of IOMMU mrs
- fix SMMU_CMDQ_CONS offset

v3 -> v4
- smmu_irq_update
- fix hash key allocation
- set smmu_iommu_ops
- set SMMU_REG_CR0,
- smmuv3_translate: ret.perm not set in bypass mode
- use trace events
- renamed STM2U64 into L1STD_L2PTR and STMSPAN into L1STD_SPAN
- rework smmu_find_ste
- fix tg2granule: for TT0, 0b10 corresponds to 16kB

v2 -> v3:
- move creation of include/hw/arm/smmuv3.h to this patch to fix a compilation issue
- compilation allowed
- fix sbus allocation in smmu_init_pci_iommu
- restructure code into headers
- misc cleanups
---
 hw/arm/Makefile.objs |   2 +-
 hw/arm/smmuv3-internal.h | 155 +
 hw/arm/smmuv3.c  | 348 +++
 hw/arm/trace-events  |   3 +
 include/hw/arm/smmuv3.h  |  91 +
 5 files changed, 598 insertions(+), 1 deletion(-)
 create mode 100644 hw/arm/smmuv3-internal.h
 create mode 100644 hw/arm/smmuv3.c
 create mode 100644 include/hw/arm/smmuv3.h

diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
index c84c5ac..676b222 100644
--- a/hw/arm/Makefile.objs
+++ b/hw/arm/Makefile.objs
@@ -20,4 +20,4 @@ obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
 obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
 obj-$(CONFIG_MPS2) += mps2.o
 obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
-obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o
+obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o smmuv3.o
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
new file mode 100644
index 000..5be8303
--- /dev/null
+++ b/hw/arm/smmuv3-internal.h
@@ -0,0 +1,155 @@
+/*
+ * ARM SMMUv3 support - Internal API
+ *
+ * Copyright (C) 2014-2016 Broadcom Corporation
+ * Copyright (c) 2017 Red Hat, Inc.
+ * Written by Prem Mallappa, Eric Auger
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see .
+ */
+
+#ifndef HW_ARM_SMMU_V3_INTERNAL_H
+#define HW_ARM_SMMU_V3_INTERNAL_H
+
+#include "qemu/log.h"
+#include "trace.h"
+#include "qemu/error-report.h"
+#include "hw/arm/smmu-common.h"
+
+/* MMIO Registers */
+
+REG32(IDR0,0x0)
+FIELD(IDR0, S1P, 1 , 1)
+FIELD(IDR0, TTF, 2 , 2)
+FIELD(IDR0, COHACC,  4 , 1)
+FIELD(IDR0, ASID16,  12, 1)
+FIELD(IDR0, TTENDIAN,21, 2)
+FIELD(IDR0, STALL_MODEL, 24, 2)
+FIELD(IDR0, TERM_MODEL,  26, 1)
+FIELD(IDR0, STLEVEL, 27, 2)
+
+REG32(IDR1,0x4)
+FIELD(IDR1, SIDSIZE,  0 , 6)
+FIELD(IDR1, EVENTQS,  16, 5)
+FIELD(IDR1, CMDQS,21, 5)
+
+#define SMMU_IDR1_SIDSIZE 16
+
+REG32(IDR2,0x8)
+REG32(IDR3,0xc)
+REG32(IDR4,0x10)
+REG32(IDR5,0x14)
+ FIELD(IDR5, OAS, 0, 3);
+ FIELD(IDR5, GRAN4K,  4, 1);
+ FIELD(IDR5, GRAN16K, 5, 1);
+ FIELD(IDR5, GRAN64K, 6, 1);
+
+#define SMMU_IDR5_OAS 4
+
+REG32(IIDR,0x1c)
+REG32(CR0, 0x20)
+FIELD(CR0, SMMU_ENABLE,   0, 1)
+FIELD(CR0, EVENTQEN,  2, 1)
+FIELD(CR0, CMDQEN,3, 1)
+
+REG32(CR0ACK,  0x24)
+REG32(CR1, 0x28)
+REG32(CR2, 0x2c)
+REG32(STATUSR, 0x40)
+REG32(IRQ_CTRL,0x50)
+FIELD(IRQ_CTRL, GERROR_IRQEN,0, 1)
+FIELD(IRQ_CTRL, PRI_IRQEN,   1, 1)
+FIELD(IRQ_CTRL, EVENTQ_IRQEN,2, 1)
+
+REG32(IRQ_CTRL_ACK,0x54)
+REG32(GERROR,  0x60)
+

[Qemu-devel] [PATCH v9 00/14] ARM SMMUv3 Emulation Support

2018-02-17 Thread Eric Auger
This series implements the emulation code for ARM SMMUv3.

SMMUv3 gets instantiated by adding ",iommu=smmuv3" to the virt
machine option.

VHOST integration will be handled in a separate series. VFIO
integration is not targeted at the moment. Only stage 1 and
AArch64 PTW are supported.

Main changes since v8:
- fix mingw compilation (qemu/log.h)
- put gpl v2 license on all files to respect initial license
- change proto of smmu_ptw* to clarify inputs/outputs and
  prepare for iotlb emulation
- fix hash table lookup
- cleanup access type handling during ptw
- cleanup reset infra (parent_reset)
- replace some inline functions by macros
- fix some CMD fields
- increment cmdq cons only after cmd execution
- replace some remaining error_report by qemu_log_mask

Best Regards

Eric

This series can be found at:
v9: https://github.com/eauger/qemu/tree/v2.11.0-SMMU-v9
Previous version at:
v8: https://github.com/eauger/qemu/tree/v2.11.0-SMMU-v8

History:

v8 -> v9:
- see above description

v7 -> v8:
Took into account Peter's comments:
- revisit queue data structures
- use registerfields.h and got rid of reg array
- use dma_memory_read for all descriptor fetches
- got rid of page table walk for an iova range and
  implemented standard page table walk for single IOVA
- revisit event data structure
- report events in many more situations and pass the event
  handle all along the decode and ptw phases
- fix gerror/gerrorn computations
- completely got rid of stage2 decoding
- use a machine option for instantiation
- get rid of VFIO integration
- get rid of VHOST integration (this will be added in a
  second step together with TLB emulation)
- abort in case vhost/vfio notifiers get detected
- Tested migration
- fixed TTBR index computation (issue reported by Tomasz)

v6 -> v7:
- DPDK testpmd now running on guest with 2 assigned VFs
- Changed the instantiation method: add the following option to
  the QEMU command line
  -device smmu # for virtio/vhost use cases
  -device smmu,caching-mode # for vfio use cases (based on [1])
- splitted the series into smaller patches to allow the review
- the VFIO integration based on the "tlbi-on-map" smmuv3 driver
  is isolated from the rest: last 2 patches, not for upstream.
  This is shipped for testing/bench until a better solution is found.
- Reworked permission flag checks and event generation

v5 -> v6:
- Rebase on 2.10 and IOMMUMemoryRegion
- add ACPI TLBI_ON_MAP support (VFIO integration also works in
  ACPI mode)
- fix block replay
- handle implementation defined SMMU_CMD_TLBI_NH_VA_AM cmd
  (goes along with TLBI_ON_MAP FW quirk)
- replay systematically unmap the whole range first
- smmuv3_map_hook does not unmap anymore and the unmap is done
  before the replay
- add and use smmuv3_context_device_invalidate instead of
  blindly replaying everything

v4 -> v5:
- initial_level now part of SMMUTransCfg
- smmu_page_walk_64 takes into account the max input size
- implement sys->iommu_ops.replay and sys->iommu_ops.notify_flag_changed
- smmuv3_translate: bug fix: don't walk on bypass
- smmu_update_qreg: fix PROD index update
- I did not yet address Peter's comments as the code is not mature enough
  to be split into sub patches.

v3 -> v4 [Eric]:
- page table walk rewritten to allow scan of the page table within a
  range of IOVA. This prepares for VFIO integration and replay.
- configuration parsing partially reworked.
- do not advertise unsupported/untested features: S2, S1 + S2, HYP,
  PRI, ATS, ..
- added ACPI table generation
- migrated to dynamic traces
- mingw compilation fix

v2 -> v3 [Eric]:
- rebased on 2.9
- mostly code and patch reorganization to ease the review process
- optional patches removed. They may be handled separately. I am currently
  working on ACPI enablement.
- optional instantiation of the smmu in mach-virt
- removed [2/9] (fdt functions) since not mandated
- start splitting main patch into base and derived object
- no new function feature added

v1 -> v2 [Prem]:
- Adopted review comments from Eric Auger
- Make SMMU_DPRINTF internally call qemu_log
  (since there are too many translation requests, we need control
  over the type of log we want)
- SMMUTransCfg modified to suite simplicity
- Change RegInfo to uint64 register array
- Code cleanup
- Test cleanups
- Reshuffled patches

v0 -> v1 [Prem]:
- As per SMMUv3 spec 16.0 (only is_ste_consistant() is noticeable)
- Reworked register access/update logic
- Factored out translation code for
- single point bug fix
- sharing/removal in future
- (optional) Unit tests added, with PCI test device
- S1 with 4k/64k, S1+S2 with 4k/64k
- (S1 or S2) only can be verified by Linux 4.7 driver
- (optional) Preliminary ACPI support

v0 [Prem]:
- Implements SMMUv3 spec 11.0
- Supported for PCIe devices,
- Command Queue and Event Queue supported
- LPAE only, S1 is supported and Tested, S2 not tested
- BE mode Translation not supported

[Qemu-devel] [PATCH v9 14/14] hw/arm/virt: Handle iommu in 2.12 machine type

2018-02-17 Thread Eric Auger
The new machine type exposes a new "iommu" virt machine option.
The SMMUv3 IOMMU is instantiated using -machine virt,iommu=smmuv3.
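
For example (only the "-machine virt,iommu=smmuv3" part comes from this
series; the rest of the command line is illustrative and assumes an
aarch64 guest image you already have):

  $ qemu-system-aarch64 \
      -M virt,iommu=smmuv3 -cpu cortex-a57 -m 1G \
      -kernel Image -append "console=ttyAMA0 root=/dev/vda" \
      -drive file=rootfs.img,if=none,id=d0 -device virtio-blk-pci,drive=d0 \
      -nographic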

Signed-off-by: Eric Auger 

---
v7 -> v8:
- Revert to machine option, now dubbed "iommu", preparing for virtio
  instantiation.

v5 -> v6: machine 2_11

Another alternative would be to use the -device option as
done on x86. As the smmu is a sysbus device, we would need to
use the platform bus framework.
---
 hw/arm/virt.c | 45 +
 include/hw/arm/virt.h |  1 +
 2 files changed, 46 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index e9dca0d..607c7e1 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1547,6 +1547,34 @@ static void virt_set_gic_version(Object *obj, const char 
*value, Error **errp)
 }
 }
 
+static char *virt_get_iommu(Object *obj, Error **errp)
+{
+VirtMachineState *vms = VIRT_MACHINE(obj);
+
+switch (vms->iommu) {
+case VIRT_IOMMU_NONE:
+return g_strdup("none");
+case VIRT_IOMMU_SMMUV3:
+return g_strdup("smmuv3");
+default:
+return g_strdup("none");
+}
+}
+
+static void virt_set_iommu(Object *obj, const char *value, Error **errp)
+{
+VirtMachineState *vms = VIRT_MACHINE(obj);
+
+if (!strcmp(value, "smmuv3")) {
+vms->iommu = VIRT_IOMMU_SMMUV3;
+} else if (!strcmp(value, "none")) {
+vms->iommu = VIRT_IOMMU_NONE;
+} else {
+error_setg(errp, "Invalid iommu value");
+error_append_hint(errp, "Valid value are none, smmuv3\n");
+}
+}
+
 static CpuInstanceProperties
 virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 {
@@ -1679,6 +1707,19 @@ static void virt_2_12_instance_init(Object *obj)
 NULL);
 }
 
+if (vmc->no_iommu) {
+vms->iommu = VIRT_IOMMU_NONE;
+} else {
+/* Default disallows smmu instantiation */
+vms->iommu = VIRT_IOMMU_NONE;
+object_property_add_str(obj, "iommu", virt_get_iommu,
+ virt_set_iommu, NULL);
+object_property_set_description(obj, "iommu",
+"Set the IOMMU model among "
+"none, smmuv3 (default none)",
+NULL);
+}
+
 vms->memmap = a15memmap;
 vms->irqmap = a15irqmap;
 }
@@ -1698,8 +1739,12 @@ static void virt_2_11_instance_init(Object *obj)
 
 static void virt_machine_2_11_options(MachineClass *mc)
 {
+VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
+
 virt_machine_2_12_options(mc);
 SET_MACHINE_COMPAT(mc, VIRT_COMPAT_2_11);
+
+vmc->no_iommu = true;
 }
 DEFINE_VIRT_MACHINE(2, 11)
 
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index 13d3724..3a92fc3 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -92,6 +92,7 @@ typedef struct {
 bool disallow_affinity_adjustment;
 bool no_its;
 bool no_pmu;
+bool no_iommu;
 bool claim_edge_triggered_timers;
 } VirtMachineClass;
 
-- 
2.5.5




Re: [Qemu-devel] [Qemu-arm] [PATCH 2/3] hw/sii9022: Add support for Silicon Image SII9022

2018-02-17 Thread Philippe Mathieu-Daudé
Hi Linus,

On 02/17/2018 11:00 AM, Linus Walleij wrote:
> This adds support for emulating the Silicon Image SII9022 DVI/HDMI
> bridge. It's not very clever right now, it just acknowledges
> the switch into DDC I2C mode and back. Combining this with the
> existing DDC I2C emulation gives the right behavior on the Versatile
> Express emulation passing through the QEMU EDID to the emulated
> platform.
> 
> Signed-off-by: Linus Walleij 
> ---
>  hw/display/Makefile.objs |   1 +
>  hw/display/sii9022.c | 185 
> +++
>  2 files changed, 186 insertions(+)
>  create mode 100644 hw/display/sii9022.c
> 
> diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
> index d3a4cb396eb9..3c7c75b94da5 100644
> --- a/hw/display/Makefile.objs
> +++ b/hw/display/Makefile.objs
> @@ -3,6 +3,7 @@ common-obj-$(CONFIG_VGA_CIRRUS) += cirrus_vga.o
>  common-obj-$(CONFIG_G364FB) += g364fb.o
>  common-obj-$(CONFIG_JAZZ_LED) += jazz_led.o
>  common-obj-$(CONFIG_PL110) += pl110.o
> +common-obj-$(CONFIG_SII9022) += sii9022.o
>  common-obj-$(CONFIG_SSD0303) += ssd0303.o
>  common-obj-$(CONFIG_SSD0323) += ssd0323.o
>  common-obj-$(CONFIG_XEN) += xenfb.o
> diff --git a/hw/display/sii9022.c b/hw/display/sii9022.c
> new file mode 100644
> index ..d6f3cdc04293
> --- /dev/null
> +++ b/hw/display/sii9022.c
> @@ -0,0 +1,185 @@
> +/*
> + * Silicon Image SiI9022
> + *
> + * This is a pretty hollow emulation: all we do is acknowledge that we
> + * exist (chip ID) and confirm that we get switched over into DDC mode
> + * so the emulated host can proceed to read out EDID data. All subsequent
> + * set-up of connectors etc will be acknowledged and ignored.
> + *
> + * Copyright (c) 2018 Linus Walleij
> + *
> + * This code is licensed under the GNU GPL v2.
> + *
> + * Contributions after 2012-01-13 are licensed under the terms of the
> + * GNU GPL, version 2 or (at your option) any later version.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu-common.h"
> +#include "hw/i2c/i2c.h"
> +
> +#define DEBUG_SII9022 0
> +
> +#define DPRINTF(fmt, ...) \
> +do { \
> +if (DEBUG_SII9022) { \
> +printf("sii9022: " fmt, ## __VA_ARGS__); \
> +} \
> +} while (0)

Can you replace DPRINTF() by trace events?
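
Something along these lines, perhaps (untested sketch; the event names are
only suggestions), i.e. an entry in hw/display/trace-events:

    # hw/display/sii9022.c
    sii9022_read_reg(uint8_t addr, uint8_t val) "addr 0x%02x val 0x%02x"
    sii9022_switch_mode(const char *mode) "mode %s"

plus #include "trace.h" in the source file, with the DPRINTF calls replaced
by e.g. trace_sii9022_read_reg(s->ptr, res).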

Except that, patch looks fine.

Regards,

Phil.

> +
> +#define SII9022_SYS_CTRL_DATA 0x1a
> +#define SII9022_SYS_CTRL_PWR_DWN 0x10
> +#define SII9022_SYS_CTRL_AV_MUTE 0x08
> +#define SII9022_SYS_CTRL_DDC_BUS_REQ 0x04
> +#define SII9022_SYS_CTRL_DDC_BUS_GRTD 0x02
> +#define SII9022_SYS_CTRL_OUTPUT_MODE 0x01
> +#define SII9022_SYS_CTRL_OUTPUT_HDMI 1
> +#define SII9022_SYS_CTRL_OUTPUT_DVI 0
> +#define SII9022_REG_CHIPID 0x1b
> +#define SII9022_INT_ENABLE 0x3c
> +#define SII9022_INT_STATUS 0x3d
> +#define SII9022_INT_STATUS_HOTPLUG 0x01;
> +#define SII9022_INT_STATUS_PLUGGED 0x04;
> +
> +#define TYPE_SII9022 "sii9022"
> +#define SII9022(obj) OBJECT_CHECK(sii9022_state, (obj), TYPE_SII9022)
> +
> +typedef struct sii9022_state {
> +I2CSlave parent_obj;
> +uint8_t ptr;
> +bool addr_byte;
> +bool ddc_req;
> +bool ddc_skip_finish;
> +bool ddc;
> +} sii9022_state;
> +
> +static const VMStateDescription vmstate_sii9022 = {
> +.name = "sii9022",
> +.version_id = 1,
> +.minimum_version_id = 1,
> +.fields = (VMStateField[]) {
> +VMSTATE_I2C_SLAVE(parent_obj, sii9022_state),
> +VMSTATE_UINT8(ptr, sii9022_state),
> +VMSTATE_BOOL(addr_byte, sii9022_state),
> +VMSTATE_BOOL(ddc_req, sii9022_state),
> +VMSTATE_BOOL(ddc_skip_finish, sii9022_state),
> +VMSTATE_BOOL(ddc, sii9022_state),
> +VMSTATE_END_OF_LIST()
> +}
> +};
> +
> +static int sii9022_event(I2CSlave *i2c, enum i2c_event event)
> +{
> +sii9022_state *s = SII9022(i2c);
> +
> +switch (event) {
> +case I2C_START_SEND:
> +s->addr_byte = true;
> +break;
> +case I2C_START_RECV:
> +break;
> +case I2C_FINISH:
> +break;
> +case I2C_NACK:
> +break;
> +}
> +
> +return 0;
> +}
> +
> +static int sii9022_rx(I2CSlave *i2c)
> +{
> +sii9022_state *s = SII9022(i2c);
> +uint8_t res = 0x00;
> +
> +switch (s->ptr) {
> +case SII9022_SYS_CTRL_DATA:
> +if (s->ddc_req) {
> +/* Acknowledge DDC bus request */
> +res = SII9022_SYS_CTRL_DDC_BUS_GRTD | 
> SII9022_SYS_CTRL_DDC_BUS_REQ;
> +}
> +break;
> +case SII9022_REG_CHIPID:
> +res = 0xb0;
> +break;
> +case SII9022_INT_STATUS:
> +/* Something is cold-plugged in, no interrupts */
> +res = SII9022_INT_STATUS_PLUGGED;
> +break;
> +default:
> +break;
> +}
> +DPRINTF("%02x read from %02x\n", res, s->ptr);
> +s->ptr++;
> +
> +return res;
> +}
> +
> +static int sii9022_tx(I2CSlave *i2c, uint8_t data)
> +{
> +sii9022_state *s = SII9022(i2c);
> +
> +if 

[Qemu-devel] [PATCH 0/2] MIPS Boston / pch_gbe ethernet support

2018-02-17 Thread Paul Burton
This short series introduces support for emulating the ethernet
controller found in the Intel EG20T Platform Controller Hub, and then
enables that device for the MIPS Boston board. This gives the Boston
board a network device matching that found on real Boston boards,
providing unmodified Boston Linux kernels with network access.

Applies atop master as of 5e8d6a12d643 ("Merge remote-tracking branch
'remotes/kraxel/tags/ui-20180216-pull-request' into staging").


Paul Burton (2):
  hw/net: Add support for Intel pch_gbe ethernet
  hw/mips/boston: Enable pch_gbe ethernet controller

 default-configs/mips64el-softmmu.mak |   1 +
 hw/mips/boston.c |   8 +-
 hw/net/Makefile.objs |   1 +
 hw/net/pch_gbe.c | 766 +++
 4 files changed, 775 insertions(+), 1 deletion(-)
 create mode 100644 hw/net/pch_gbe.c

-- 
2.16.1




[Qemu-devel] [PATCH v9 05/14] hw/arm/smmuv3: Wired IRQ and GERROR helpers

2018-02-17 Thread Eric Auger
We introduce some helpers to handle wired IRQs and especially the
GERROR interrupt. The SMMU writes the GERROR register on a GERROR
event and SW acks GERROR interrupts by setting GERRORn.

The wired interrupts are edge sensitive, hence the pulse usage.
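
A standalone sketch of the toggle/ack protocol (my summary, not the helper
code below): a given error is pending while GERROR and GERRORn disagree in
that bit position, and software acknowledges it by toggling the matching
GERRORn bit back so the two agree again. The bit chosen here is arbitrary:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t gerror = 0, gerrorn = 0;
        uint32_t err_bit = 1u << 0;       /* hypothetical error bit */

        /* Device raises the error by toggling the bit in GERROR. */
        gerror ^= err_bit;
        printf("pending = 0x%x\n", gerror ^ gerrorn);   /* 0x1: active */

        /* Software acks it by toggling the same bit in GERRORn. */
        gerrorn ^= err_bit;
        printf("pending = 0x%x\n", gerror ^ gerrorn);   /* 0x0: acked  */
        return 0;
    }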

Signed-off-by: Eric Auger 

---

v7 -> v8:
- remove SMMU_PENDING_GERRORS macro
- properly toggle gerror
- properly sanitize gerrorn write
---
 hw/arm/smmuv3-internal.h | 10 
 hw/arm/smmuv3.c  | 64 
 hw/arm/trace-events  |  3 +++
 3 files changed, 77 insertions(+)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 5be8303..40b39a1 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -152,4 +152,14 @@ static inline uint64_t smmu_read64(uint64_t r, unsigned 
offset,
 return extract64(r, offset << 3, 32);
 }
 
+/* Interrupts */
+
+#define smmuv3_eventq_irq_enabled(s)   \
+(FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN))
+#define smmuv3_gerror_irq_enabled(s)  \
+(FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN))
+
+void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
+void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
+
 #endif
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index dc03c9e..8779d3f 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -30,6 +30,70 @@
 #include "hw/arm/smmuv3.h"
 #include "smmuv3-internal.h"
 
+/**
+ * smmuv3_trigger_irq - pulse @irq if enabled and update
+ * GERROR register in case of GERROR interrupt
+ *
+ * @irq: irq type
+ * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
+ */
+void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask)
+{
+
+bool pulse = false;
+
+switch (irq) {
+case SMMU_IRQ_EVTQ:
+pulse = smmuv3_eventq_irq_enabled(s);
+break;
+case SMMU_IRQ_PRIQ:
+error_setg(&error_fatal, "PRI not supported");
+break;
+case SMMU_IRQ_CMD_SYNC:
+pulse = true;
+break;
+case SMMU_IRQ_GERROR:
+{
+uint32_t pending = s->gerror ^ s->gerrorn;
+uint32_t new_gerrors = ~pending & gerror_mask;
+
+if (!new_gerrors) {
+/* only toggle non pending errors */
+return;
+}
+s->gerror ^= new_gerrors;
+trace_smmuv3_write_gerror(new_gerrors, s->gerror);
+
+/* pulse the GERROR irq only if all previous gerrors were acked */
+pulse = smmuv3_gerror_irq_enabled(s) && !pending;
+break;
+}
+}
+if (pulse) {
+trace_smmuv3_trigger_irq(irq);
+qemu_irq_pulse(s->irq[irq]);
+}
+}
+
+void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
+{
+uint32_t pending = s->gerror ^ s->gerrorn;
+uint32_t toggled = s->gerrorn ^ new_gerrorn;
+uint32_t acked;
+
+if (toggled & ~pending) {
+qemu_log_mask(LOG_GUEST_ERROR,
+  "guest toggles non pending errors = 0x%x\n",
+  toggled & ~pending);
+}
+
+/* Make sure SW does not toggle irqs that are not active */
+acked = toggled & pending;
+s->gerrorn ^= acked;
+
+trace_smmuv3_write_gerrorn(acked, s->gerrorn);
+}
+
 static void smmuv3_init_regs(SMMUv3State *s)
 {
 /**
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 64d2b9b..2ddae40 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -15,3 +15,6 @@ smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, 
uint64_t pte) "base
 
 #hw/arm/smmuv3.c
 smmuv3_read_mmio(hwaddr addr, uint64_t val, unsigned size) "addr: 0x%"PRIx64" 
val:0x%"PRIx64" size: 0x%x"
+smmuv3_trigger_irq(int irq) "irq=%d"
+smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new 
gerror=0x%x"
+smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new 
gerrorn=0x%x"
-- 
2.5.5




[Qemu-devel] [PATCH v2 63/67] target/arm: Implement SVE floating-point trig multiply-add coefficient

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  4 +++
 target/arm/sve_helper.c| 70 ++
 target/arm/translate-sve.c | 26 +
 target/arm/sve.decode  |  3 ++
 4 files changed, 103 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 696c97648b..ce5fe24dc2 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -1037,6 +1037,10 @@ DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG, 
void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ftmad_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 6a052ce9ad..53e3516f47 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3338,6 +3338,76 @@ DO_FPCMP_PPZ0_ALL(sve_fcmlt0, DO_FCMLT)
 DO_FPCMP_PPZ0_ALL(sve_fcmeq0, DO_FCMEQ)
 DO_FPCMP_PPZ0_ALL(sve_fcmne0, DO_FCMNE)
 
+/* FP Trig Multiply-Add. */
+
+void HELPER(sve_ftmad_h)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
+{
+static const float16 coeff[16] = {
+0x3c00, 0xb155, 0x2030, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
+0x3c00, 0xb800, 0x293a, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(float16);
+intptr_t x = simd_data(desc);
+float16 *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i++) {
+float16 mm = m[i];
+intptr_t xx = x;
+if (float16_is_neg(mm)) {
+mm = float16_abs(mm);
+xx += 8;
+}
+d[i] = float16_muladd(n[i], mm, coeff[xx], 0, vs);
+}
+}
+
+void HELPER(sve_ftmad_s)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
+{
+static const float32 coeff[16] = {
+0x3f80, 0xbe2b, 0x3c06, 0xb95008b9,
+0x36369d6d, 0x00000000, 0x00000000, 0x00000000,
+0x3f80, 0xbf00, 0x3d26, 0xbab60705,
+0x37cd37cc, 0x00000000, 0x00000000, 0x00000000,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(float32);
+intptr_t x = simd_data(desc);
+float32 *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i++) {
+float32 mm = m[i];
+intptr_t xx = x;
+if (float32_is_neg(mm)) {
+mm = float32_abs(mm);
+xx += 8;
+}
+d[i] = float32_muladd(n[i], mm, coeff[xx], 0, vs);
+}
+}
+
+void HELPER(sve_ftmad_d)(void *vd, void *vn, void *vm, void *vs, uint32_t desc)
+{
+static const float64 coeff[16] = {
+0x3ff0ull, 0xbfc55543ull,
+0x3f80f30cull, 0xbf2a01a019b92fc6ull,
+0x3ec71de351f3d22bull, 0xbe5ae5e2b60f7b91ull,
+0x3de5d8408868552full, 0x0000000000000000ull,
+0x3ff0000000000000ull, 0xbfe0000000000000ull,
+0x3fa55536ull, 0xbf56c16c16c13a0bull,
+0x3efa01a019b1e8d8ull, 0xbe927e4f7282f468ull,
+0x3e21ee96d2641b13ull, 0xbda8f76380fbb401ull,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(float64);
+intptr_t x = simd_data(desc);
+float64 *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i++) {
+float64 mm = m[i];
+intptr_t xx = x;
+if (float64_is_neg(mm)) {
+mm = float64_abs(mm);
+xx += 8;
+}
+d[i] = float64_muladd(n[i], mm, coeff[xx], 0, vs);
+}
+}
+
 /*
  * Load contiguous data, protected by a governing predicate.
  */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 02655bff03..e185af29e3 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3319,6 +3319,32 @@ DO_PPZ(FCMNE_ppz0, fcmne0)
 
 #undef DO_PPZ
 
+/*
+ *** SVE floating-point trig multiply-add coefficient
+ */
+
+static void trans_FTMAD(DisasContext *s, arg_FTMAD *a, uint32_t insn)
+{
+static gen_helper_gvec_3_ptr * const fns[3] = {
+gen_helper_sve_ftmad_h,
+gen_helper_sve_ftmad_s,
+gen_helper_sve_ftmad_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr status;
+
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+status = get_fpstatus_ptr(a->esz == MO_16);
+tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   status, vsz, vsz, a->imm, fns[a->esz - 1]);
+

[Qemu-devel] [PATCH v9 02/14] hw/arm/smmu-common: IOMMU memory region and address space setup

2018-02-17 Thread Eric Auger
We enumerate all the PCI devices attached to the SMMU and
initialize an associated IOMMU memory region and address space.
This happens on SMMU base instance init.

This information is stored in SMMUDevice objects. The devices are
grouped according to the PCIBus they belong to. A hash table
indexed by the PCIBus pointer is used. An array indexed by the bus
number also allows finding the list of SMMUDevices.
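
A standalone sketch of that two-level lookup (plain GLib, none of the QEMU
types; the names are made up): a hash table keyed directly on the bus
pointer holds the per-bus state, and a small array indexed by bus number is
filled lazily as buses get resolved:

    #include <glib.h>
    #include <stdio.h>

    typedef struct PerBus {
        void *bus;       /* key: opaque bus pointer           */
        int   bus_num;   /* resolved lazily in the real code  */
    } PerBus;

    int main(void)
    {
        GHashTable *by_busptr = g_hash_table_new(NULL, NULL); /* ptr keys   */
        PerBus *by_busnum[256] = { NULL };                    /* lazy cache */
        static int fake_bus;              /* stands in for a PCIBus object  */

        PerBus *pb = g_new0(PerBus, 1);
        pb->bus = &fake_bus;
        pb->bus_num = 0;
        g_hash_table_insert(by_busptr, pb->bus, pb);

        /* Fast path: look up by pointer, then cache by bus number. */
        PerBus *hit = g_hash_table_lookup(by_busptr, &fake_bus);
        by_busnum[hit->bus_num] = hit;
        printf("found bus %d\n", hit->bus_num);

        g_free(pb);
        g_hash_table_destroy(by_busptr);
        return 0;
    }

The 16-bit stream ID is then simply (bus_num << 8) | devfn, as in
smmu_get_sid() below.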

Signed-off-by: Eric Auger 

---
v8 -> v9:
- fix key value for lookup

v7 -> v8:
- introduce SMMU_MAX_VA_BITS
- use PCI bus handle as a key
- do not clear s->smmu_as_by_bus_num
- use g_new0 instead of g_malloc0
- use primary_bus field
---
 hw/arm/smmu-common.c | 59 
 include/hw/arm/smmu-common.h |  6 +
 2 files changed, 65 insertions(+)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 86a5aab..d0516dc 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -28,12 +28,71 @@
 #include "qemu/error-report.h"
 #include "hw/arm/smmu-common.h"
 
+SMMUPciBus *smmu_find_as_from_bus_num(SMMUState *s, uint8_t bus_num)
+{
+SMMUPciBus *smmu_pci_bus = s->smmu_as_by_bus_num[bus_num];
+
+if (!smmu_pci_bus) {
+GHashTableIter iter;
+
+g_hash_table_iter_init(&iter, s->smmu_as_by_busptr);
+while (g_hash_table_iter_next(&iter, NULL, (void **)&smmu_pci_bus)) {
+if (pci_bus_num(smmu_pci_bus->bus) == bus_num) {
+s->smmu_as_by_bus_num[bus_num] = smmu_pci_bus;
+return smmu_pci_bus;
+}
+}
+}
+return smmu_pci_bus;
+}
+
+static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
+{
+SMMUState *s = opaque;
+SMMUPciBus *sbus = g_hash_table_lookup(s->smmu_as_by_busptr, bus);
+SMMUDevice *sdev;
+
+if (!sbus) {
+sbus = g_malloc0(sizeof(SMMUPciBus) +
+ sizeof(SMMUDevice *) * SMMU_PCI_DEVFN_MAX);
+sbus->bus = bus;
+g_hash_table_insert(s->smmu_as_by_busptr, bus, sbus);
+}
+
+sdev = sbus->pbdev[devfn];
+if (!sdev) {
+char *name = g_strdup_printf("%s-%d-%d",
+ s->mrtypename,
+ pci_bus_num(bus), devfn);
+sdev = sbus->pbdev[devfn] = g_new0(SMMUDevice, 1);
+
+sdev->smmu = s;
+sdev->bus = bus;
+sdev->devfn = devfn;
+
+memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
+ s->mrtypename,
+ OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
+address_space_init(&sdev->as,
+   MEMORY_REGION(&sdev->iommu), name);
+}
+
+return &sdev->as;
+}
+
 static void smmu_base_realize(DeviceState *dev, Error **errp)
 {
 SMMUState *s = ARM_SMMU(dev);
 
 s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
 s->iotlb = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+s->smmu_as_by_busptr = g_hash_table_new(NULL, NULL);
+
+if (s->primary_bus) {
+pci_setup_iommu(s->primary_bus, smmu_find_add_as, s);
+} else {
+error_setg(errp, "SMMU is not attached to any PCI bus!");
+}
 }
 
 static void smmu_base_reset(DeviceState *dev)
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 8a9d931..aee96c2 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -121,4 +121,10 @@ typedef struct {
 #define ARM_SMMU_GET_CLASS(obj)  \
 OBJECT_GET_CLASS(SMMUBaseClass, (obj), TYPE_ARM_SMMU)
 
+SMMUPciBus *smmu_find_as_from_bus_num(SMMUState *s, uint8_t bus_num);
+
+static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
+{
+return  ((pci_bus_num(sdev->bus) & 0xff) << 8) | sdev->devfn;
+}
 #endif  /* HW_ARM_SMMU_COMMON */
-- 
2.5.5




[Qemu-devel] [PATCH v2 56/67] target/arm: Implement SVE scatter store vector immediate

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 79 +++---
 target/arm/sve.decode  | 11 +++
 2 files changed, 65 insertions(+), 25 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 6484ecd257..0241e8e707 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4011,31 +4011,33 @@ static void trans_LD1_zpiz(DisasContext *s, 
arg_LD1_zpiz *a, uint32_t insn)
 tcg_temp_free_i64(imm);
 }
 
+/* Indexed by [xs][msz].  */
+static gen_helper_gvec_mem_scatter * const scatter_store_fn32[2][3] = {
+{ gen_helper_sve_stbs_zsu,
+  gen_helper_sve_sths_zsu,
+  gen_helper_sve_stss_zsu, },
+{ gen_helper_sve_stbs_zss,
+  gen_helper_sve_sths_zss,
+  gen_helper_sve_stss_zss, },
+};
+
+static gen_helper_gvec_mem_scatter * const scatter_store_fn64[3][4] = {
+{ gen_helper_sve_stbd_zsu,
+  gen_helper_sve_sthd_zsu,
+  gen_helper_sve_stsd_zsu,
+  gen_helper_sve_stdd_zsu, },
+{ gen_helper_sve_stbd_zss,
+  gen_helper_sve_sthd_zss,
+  gen_helper_sve_stsd_zss,
+  gen_helper_sve_stdd_zss, },
+{ gen_helper_sve_stbd_zd,
+  gen_helper_sve_sthd_zd,
+  gen_helper_sve_stsd_zd,
+  gen_helper_sve_stdd_zd, },
+};
+
 static void trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a, uint32_t insn)
 {
-/* Indexed by [xs][msz].  */
-static gen_helper_gvec_mem_scatter * const fn32[2][3] = {
-{ gen_helper_sve_stbs_zsu,
-  gen_helper_sve_sths_zsu,
-  gen_helper_sve_stss_zsu, },
-{ gen_helper_sve_stbs_zss,
-  gen_helper_sve_sths_zss,
-  gen_helper_sve_stss_zss, },
-};
-static gen_helper_gvec_mem_scatter * const fn64[3][4] = {
-{ gen_helper_sve_stbd_zsu,
-  gen_helper_sve_sthd_zsu,
-  gen_helper_sve_stsd_zsu,
-  gen_helper_sve_stdd_zsu, },
-{ gen_helper_sve_stbd_zss,
-  gen_helper_sve_sthd_zss,
-  gen_helper_sve_stsd_zss,
-  gen_helper_sve_stdd_zss, },
-{ gen_helper_sve_stbd_zd,
-  gen_helper_sve_sthd_zd,
-  gen_helper_sve_stsd_zd,
-  gen_helper_sve_stdd_zd, },
-};
 gen_helper_gvec_mem_scatter *fn;
 
 if (a->esz < a->msz || (a->msz == 0 && a->scale)) {
@@ -4044,10 +4046,10 @@ static void trans_ST1_zprz(DisasContext *s, 
arg_ST1_zprz *a, uint32_t insn)
 }
 switch (a->esz) {
 case MO_32:
-fn = fn32[a->xs][a->msz];
+fn = scatter_store_fn32[a->xs][a->msz];
 break;
 case MO_64:
-fn = fn64[a->xs][a->msz];
+fn = scatter_store_fn64[a->xs][a->msz];
 break;
 default:
 g_assert_not_reached();
@@ -4056,6 +4058,33 @@ static void trans_ST1_zprz(DisasContext *s, arg_ST1_zprz 
*a, uint32_t insn)
cpu_reg_sp(s, a->rn), fn);
 }
 
+static void trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a, uint32_t insn)
+{
+gen_helper_gvec_mem_scatter *fn = NULL;
+TCGv_i64 imm;
+
+if (a->esz < a->msz) {
+unallocated_encoding(s);
+return;
+}
+
+switch (a->esz) {
+case MO_32:
+fn = scatter_store_fn32[0][a->msz];
+break;
+case MO_64:
+fn = scatter_store_fn64[2][a->msz];
+break;
+}
+assert(fn != NULL);
+
+/* Treat ST1_zpiz (zn[x] + imm) the same way as ST1_zprz (rn + zm[x])
+   by loading the immediate into the scalar parameter.  */
+imm = tcg_const_i64(a->imm << a->msz);
+do_mem_zpz(s, a->rd, a->pg, a->rn, 0, imm, fn);
+tcg_temp_free_i64(imm);
+}
+
 /*
  * Prefetches
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index f85d82e009..6ccb4289fc 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -84,6 +84,7 @@
 &rprr_gather_load  rd pg rn rm esz msz u ff xs scale
 &rpri_gather_load  rd pg rn imm esz msz u ff
 &rprr_scatter_store  rd pg rn rm esz msz xs scale
+&rpri_scatter_store  rd pg rn imm esz msz
 
 ###
 # Named instruction formats.  These are generally used to
@@ -216,6 +217,8 @@
_store nreg=0
 @rprr_scatter_store ... msz:2 .. rm:5 ... pg:3 rn:5 rd:5 \
    &rprr_scatter_store
+@rpri_scatter_store ... msz:2 ..imm:5 ... pg:3 rn:5 rd:5 \
+   &rpri_scatter_store
 
 ###
 # Instruction patterns.  Grouped according to the SVE encodingindex.xhtml.
@@ -935,6 +938,14 @@ ST1_zprz   1110010 .. 01 . 101 ... . . \
 ST1_zprz   1110010 .. 00 . 101 ... . . \
@rprr_scatter_store xs=2 esz=3 scale=0
 
+# SVE 64-bit scatter store (vector plus immediate)
+ST1_zpiz   1110010 .. 10 . 101 ... . . \
+   @rpri_scatter_store esz=3
+
+# SVE 32-bit scatter store (vector plus immediate)
+ST1_zpiz   1110010 .. 11 . 101 ... . . \
+ 

[Qemu-devel] [PATCH 1/2] hw/net: Add support for Intel pch_gbe ethernet

2018-02-17 Thread Paul Burton
This patch introduces support for emulating the ethernet controller
found in the Intel EG20T Platform Controller Hub, referred to as pch_gbe
for consistency with both Linux & U-Boot.

Documentation for the hardware can be found here:

  
https://www.intel.com/content/www/us/en/intelligent-systems/queens-bay/platform-controller-hub-eg20t-datasheet.html

The device is used on MIPS Boston development boards as well as on the
Intel Crown Bay platform, including devices such as the Minnowboard V1.

Enough functionality is implemented for Linux to make use of the device;
this has been tested using Linux v4.16-rc1.

Signed-off-by: Paul Burton 
Cc: Aurelien Jarno 
Cc: Yongbok Kim 
---

 hw/net/Makefile.objs |   1 +
 hw/net/pch_gbe.c | 766 +++
 2 files changed, 767 insertions(+)
 create mode 100644 hw/net/pch_gbe.c

diff --git a/hw/net/Makefile.objs b/hw/net/Makefile.objs
index ab22968641..08706d9a96 100644
--- a/hw/net/Makefile.objs
+++ b/hw/net/Makefile.objs
@@ -12,6 +12,7 @@ common-obj-$(CONFIG_E1000E_PCI) += e1000e.o e1000e_core.o 
e1000x_common.o
 common-obj-$(CONFIG_RTL8139_PCI) += rtl8139.o
 common-obj-$(CONFIG_VMXNET3_PCI) += net_tx_pkt.o net_rx_pkt.o
 common-obj-$(CONFIG_VMXNET3_PCI) += vmxnet3.o
+common-obj-$(CONFIG_PCH_GBE_PCI) += pch_gbe.o
 
 common-obj-$(CONFIG_SMC91C111) += smc91c111.o
 common-obj-$(CONFIG_LAN9118) += lan9118.o
diff --git a/hw/net/pch_gbe.c b/hw/net/pch_gbe.c
new file mode 100644
index 00..be9a9f5916
--- /dev/null
+++ b/hw/net/pch_gbe.c
@@ -0,0 +1,766 @@
+#include "qemu/osdep.h"
+#include "hw/hw.h"
+#include "hw/net/mii.h"
+#include "hw/pci/pci.h"
+#include "net/checksum.h"
+#include "net/eth.h"
+#include "net/net.h"
+#include "qemu/bitops.h"
+#include "qemu/log.h"
+
+#define TYPE_PCH_GBE"pch_gbe"
+#define PCH_GBE(obj)OBJECT_CHECK(PCHGBEState, (obj), TYPE_PCH_GBE)
+
+#define PCH_GBE_INTR_RX_DMA_CMPLT   BIT(0)
+#define PCH_GBE_INTR_RX_VALID   BIT(1)
+#define PCH_GBE_INTR_RX_FRAME_ERR   BIT(2)
+#define PCH_GBE_INTR_RX_FIFO_ERRBIT(3)
+#define PCH_GBE_INTR_RX_DMA_ERR BIT(4)
+#define PCH_GBE_INTR_RX_DSC_EMP BIT(5)
+#define PCH_GBE_INTR_TX_CMPLT   BIT(8)
+#define PCH_GBE_INTR_TX_DMA_CMPLT   BIT(9)
+#define PCH_GBE_INTR_TX_FIFO_ERRBIT(10)
+#define PCH_GBE_INTR_TX_DMA_ERR BIT(11)
+#define PCH_GBE_INTR_PAUSE_CMPLTBIT(12)
+#define PCH_GBE_INTR_MIIM_CMPLT BIT(16)
+#define PCH_GBE_INTR_PHY_INTBIT(20)
+#define PCH_GBE_INTR_WOL_DETBIT(24)
+#define PCH_GBE_INTR_TCPIP_ERR  BIT(28)
+#define PCH_GBE_INTR_ALL (  \
+PCH_GBE_INTR_RX_DMA_CMPLT | \
+PCH_GBE_INTR_RX_VALID | \
+PCH_GBE_INTR_RX_FRAME_ERR | \
+PCH_GBE_INTR_RX_FIFO_ERR |  \
+PCH_GBE_INTR_RX_DMA_ERR |   \
+PCH_GBE_INTR_RX_DSC_EMP |   \
+PCH_GBE_INTR_TX_CMPLT | \
+PCH_GBE_INTR_TX_DMA_CMPLT | \
+PCH_GBE_INTR_TX_FIFO_ERR |  \
+PCH_GBE_INTR_TX_DMA_ERR |   \
+PCH_GBE_INTR_PAUSE_CMPLT |  \
+PCH_GBE_INTR_MIIM_CMPLT |   \
+PCH_GBE_INTR_PHY_INT |  \
+PCH_GBE_INTR_WOL_DET |  \
+PCH_GBE_INTR_TCPIP_ERR)
+
+struct pch_gbe_tx_desc {
+uint32_t addr;
+
+uint32_t len;
+#define PCH_GBE_TX_LENGTH   0x
+
+uint32_t control;
+#define PCH_GBE_TX_CONTROL_EOB  0x3
+#define PCH_GBE_TX_CONTROL_WORDS0xfffc
+#define PCH_GBE_TX_CONTROL_APAD BIT(16)
+#define PCH_GBE_TX_CONTROL_ICRC BIT(17)
+#define PCH_GBE_TX_CONTROL_ITAG BIT(18)
+#define PCH_GBE_TX_CONTROL_ACCOFF   BIT(19)
+
+uint32_t status;
+#define PCH_GBE_TX_STATUS_TSHRT BIT(22)
+#define PCH_GBE_TX_STATUS_TLNG  BIT(23)
+#define PCH_GBE_TX_STATUS_ABT   BIT(28)
+#define PCH_GBE_TX_STATUS_CMPLT BIT(29)
+};
+
+struct pch_gbe_rx_desc {
+uint32_t addr;
+
+uint32_t acc_status;
+
+uint32_t mac_status;
+#define PCH_GBE_RX_MAC_STATUS_EOB   0x3
+#define PCH_GBE_RX_MAC_STATUS_WORDS 0xfffc
+#define PCH_GBE_RX_MAC_STATUS_LENGTH0x
+#define PCH_GBE_RX_MAC_STATUS_TSHRT BIT(19)
+#define PCH_GBE_RX_MAC_STATUS_TLNG  BIT(20)
+
+uint32_t dma_status;
+};
+
+typedef struct {
+/*< private >*/
+PCIDevice parent_obj;
+/*< public >*/
+
+NICState *nic;
+NICConf conf;
+
+bool reset;
+bool phy_reset;
+
+bool link;
+
+uint32_t intr_status;
+uint32_t intr_status_hold;
+uint32_t intr_enable;
+
+uint16_t addr_mask;
+
+bool rx_enable;
+bool rx_dma_enable;
+bool rx_acc_enable;
+bool rx_acc_csum_off;
+uint32_t rx_desc_base;
+uint32_t rx_desc_size;
+uint32_t rx_desc_hard_ptr;
+uint32_t rx_desc_hard_ptr_hold;
+uint32_t rx_desc_soft_ptr;
+
+bool tx_dma_enable;
+bool tx_acc_enable;
+

[Qemu-devel] [PATCH v9 08/14] hw/arm/smmuv3: Event queue recording helper

2018-02-17 Thread Eric Auger
Let's introduce a helper function that records an event in the
event queue.
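
As a rough usage sketch (mine, not from the patch; the smmuv3_record_event()
name and its (SMMUv3State *, SMMUEventInfo *) signature are assumed from the
changelog below), a translation-fault path would fill an SMMUEventInfo and
hand it to the helper:

    SMMUEventInfo event = {
        .type = SMMU_EVT_F_TRANSLATION,
        .sid  = sid,                     /* faulting StreamID */
        .record_trans_faults = true,     /* whether this fault class is recorded */
    };

    /* ... on translation failure ... */
    smmuv3_record_event(s, &event);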

Signed-off-by: Eric Auger 

---

v8 -> v9:
- add SMMU_EVENT_STRING

v7 -> v8:
- use dma_addr_t instead of hwaddr in smmuv3_record_event()
- introduce struct SMMUEventInfo
- add event_stringify + helpers for all fields
---
 hw/arm/smmuv3-internal.h | 140 ++-
 hw/arm/smmuv3.c  |  91 +-
 hw/arm/trace-events  |   1 +
 3 files changed, 229 insertions(+), 3 deletions(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 5af97ae..3929f69 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -226,8 +226,6 @@ static inline void smmu_write_cmdq_err(SMMUv3State *s, 
uint32_t err_type)
 s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
 }
 
-void smmuv3_write_eventq(SMMUv3State *s, Evt *evt);
-
 /* Commands */
 
 enum {
@@ -326,4 +324,142 @@ enum { /* Command completion notification */
 addr; \
 })
 
+/* Events */
+
+typedef enum SMMUEventType {
+SMMU_EVT_OK = 0x00,
+SMMU_EVT_F_UUT  = 0x01,
+SMMU_EVT_C_BAD_STREAMID = 0x02,
+SMMU_EVT_F_STE_FETCH= 0x03,
+SMMU_EVT_C_BAD_STE  = 0x04,
+SMMU_EVT_F_BAD_ATS_TREQ = 0x05,
+SMMU_EVT_F_STREAM_DISABLED  = 0x06,
+SMMU_EVT_F_TRANS_FORBIDDEN  = 0x07,
+SMMU_EVT_C_BAD_SUBSTREAMID  = 0x08,
+SMMU_EVT_F_CD_FETCH = 0x09,
+SMMU_EVT_C_BAD_CD   = 0x0a,
+SMMU_EVT_F_WALK_EABT= 0x0b,
+SMMU_EVT_F_TRANSLATION  = 0x10,
+SMMU_EVT_F_ADDR_SIZE= 0x11,
+SMMU_EVT_F_ACCESS   = 0x12,
+SMMU_EVT_F_PERMISSION   = 0x13,
+SMMU_EVT_F_TLB_CONFLICT = 0x20,
+SMMU_EVT_F_CFG_CONFLICT = 0x21,
+SMMU_EVT_E_PAGE_REQ = 0x24,
+} SMMUEventType;
+
+static const char *event_stringify[] = {
+[SMMU_EVT_OK]   = "SMMU_EVT_OK",
+[SMMU_EVT_F_UUT]= "SMMU_EVT_F_UUT",
+[SMMU_EVT_C_BAD_STREAMID]   = "SMMU_EVT_C_BAD_STREAMID",
+[SMMU_EVT_F_STE_FETCH]  = "SMMU_EVT_F_STE_FETCH",
+[SMMU_EVT_C_BAD_STE]= "SMMU_EVT_C_BAD_STE",
+[SMMU_EVT_F_BAD_ATS_TREQ]   = "SMMU_EVT_F_BAD_ATS_TREQ",
+[SMMU_EVT_F_STREAM_DISABLED]= "SMMU_EVT_F_STREAM_DISABLED",
+[SMMU_EVT_F_TRANS_FORBIDDEN]= "SMMU_EVT_F_TRANS_FORBIDDEN",
+[SMMU_EVT_C_BAD_SUBSTREAMID]= "SMMU_EVT_C_BAD_SUBSTREAMID",
+[SMMU_EVT_F_CD_FETCH]   = "SMMU_EVT_F_CD_FETCH",
+[SMMU_EVT_C_BAD_CD] = "SMMU_EVT_C_BAD_CD",
+[SMMU_EVT_F_WALK_EABT]  = "SMMU_EVT_F_WALK_EABT",
+[SMMU_EVT_F_TRANSLATION]= "SMMU_EVT_F_TRANSLATION",
+[SMMU_EVT_F_ADDR_SIZE]  = "SMMU_EVT_F_ADDR_SIZE",
+[SMMU_EVT_F_ACCESS] = "SMMU_EVT_F_ACCESS",
+[SMMU_EVT_F_PERMISSION] = "SMMU_EVT_F_PERMISSION",
+[SMMU_EVT_F_TLB_CONFLICT]   = "SMMU_EVT_F_TLB_CONFLICT",
+[SMMU_EVT_F_CFG_CONFLICT]   = "SMMU_EVT_F_CFG_CONFLICT",
+[SMMU_EVT_E_PAGE_REQ]   = "SMMU_EVT_E_PAGE_REQ",
+};
+
+#define SMMU_EVENT_STRING(event) ( \
+(event < ARRAY_SIZE(event_stringify)) ? event_stringify[event] : "UNKNOWN" \
+)
+
+typedef struct SMMUEventInfo {
+SMMUEventType type;
+uint32_t sid;
+bool recorded;
+bool record_trans_faults;
+union {
+struct {
+uint32_t ssid;
+bool ssv;
+dma_addr_t addr;
+bool rnw;
+bool pnu;
+bool ind;
+   } f_uut;
+   struct ssid_info {
+uint32_t ssid;
+bool ssv;
+   } c_bad_streamid;
+   struct ssid_addr_info {
+uint32_t ssid;
+bool ssv;
+dma_addr_t addr;
+   } f_ste_fetch;
+   struct ssid_info c_bad_ste;
+   struct {
+dma_addr_t addr;
+bool rnw;
+   } f_transl_forbidden;
+   struct {
+uint32_t ssid;
+   } c_bad_substream;
+   struct ssid_addr_info f_cd_fetch;
+   struct ssid_info c_bad_cd;
+   struct full_info {
+bool stall;
+uint16_t stag;
+uint32_t ssid;
+bool ssv;
+bool s2;
+dma_addr_t addr;
+bool rnw;
+bool pnu;
+bool ind;
+uint8_t class;
+dma_addr_t addr2;
+   } f_walk_eabt;
+   struct full_info f_translation;
+   struct full_info f_addr_size;
+   struct full_info f_access;
+   struct full_info f_permission;
+   struct ssid_info f_cfg_conflict;
+   /**
+* not supported yet:
+* F_BAD_ATS_TREQ
+* F_BAD_ATS_TREQ
+* F_TLB_CONFLICT
+* E_PAGE_REQUEST
+* IMPDEF_EVENTn

[Qemu-devel] [PATCH v2 66/67] target/arm: Implement SVE floating-point round to integral value

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 
 target/arm/sve_helper.c|  8 +
 target/arm/translate-sve.c | 80 ++
 target/arm/sve.decode  |  9 ++
 4 files changed, 111 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 0f5fea9045..749bab0b38 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -985,6 +985,20 @@ DEF_HELPER_FLAGS_5(sve_fcvtzu_sd, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_fcvtzu_dd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_frint_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frint_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frint_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_frintx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frintx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frintx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 09f5c77254..7950710be7 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3200,6 +3200,14 @@ DO_ZPZ_FP_D(sve_fcvtzu_sd, uint64_t, 
float32_to_uint64_round_to_zero)
 DO_ZPZ_FP_D(sve_fcvtzu_ds, uint64_t, float64_to_uint32_round_to_zero)
 DO_ZPZ_FP_D(sve_fcvtzu_dd, uint64_t, float64_to_uint64_round_to_zero)
 
+DO_ZPZ_FP(sve_frint_h, uint16_t, H1_2, helper_advsimd_rinth)
+DO_ZPZ_FP(sve_frint_s, uint32_t, H1_4, helper_rints)
+DO_ZPZ_FP_D(sve_frint_d, uint64_t, helper_rintd)
+
+DO_ZPZ_FP(sve_frintx_h, uint16_t, H1_2, float16_round_to_int)
+DO_ZPZ_FP(sve_frintx_s, uint32_t, H1_4, float32_round_to_int)
+DO_ZPZ_FP_D(sve_frintx_d, uint64_t, float64_round_to_int)
+
 DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
 DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
 DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index bc865dfd15..5f1c4984b8 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3751,6 +3751,86 @@ static void trans_FCVTZU_dd(DisasContext *s, arg_rpr_esz 
*a, uint32_t insn)
 do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvtzu_dd);
 }
 
+static gen_helper_gvec_3_ptr * const frint_fns[3] = {
+gen_helper_sve_frint_h,
+gen_helper_sve_frint_s,
+gen_helper_sve_frint_d
+};
+
+static void trans_FRINTI(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+   frint_fns[a->esz - 1]);
+}
+}
+
+static void trans_FRINTX(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_3_ptr * const fns[3] = {
+gen_helper_sve_frintx_h,
+gen_helper_sve_frintx_s,
+gen_helper_sve_frintx_d
+};
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+}
+
+static void do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
+{
+unsigned vsz = vec_full_reg_size(s);
+TCGv_i32 tmode;
+TCGv_ptr status;
+
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+
+tmode = tcg_const_i32(mode);
+status = get_fpstatus_ptr(a->esz == MO_16);
+gen_helper_set_rmode(tmode, tmode, status);
+
+tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   pred_full_reg_offset(s, a->pg),
+   status, vsz, vsz, 0, frint_fns[a->esz - 1]);
+
+gen_helper_set_rmode(tmode, tmode, status);
+tcg_temp_free_i32(tmode);
+tcg_temp_free_ptr(status);
+}
+
+static void trans_FRINTN(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_frint_mode(s, a, float_round_nearest_even);
+}
+
+static void trans_FRINTP(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_frint_mode(s, a, float_round_up);
+}
+
+static void trans_FRINTM(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_frint_mode(s, a, float_round_down);
+}
+
+static void trans_FRINTZ(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_frint_mode(s, a, float_round_to_zero);
+}
+
+static void trans_FRINTA(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_frint_mode(s, a, float_round_ties_away);
+}
+
 static void trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 {
 do_zpz_ptr(s, a->rd, a->rn, a->pg, true, 

[Qemu-devel] [PATCH v2 55/67] target/arm: Implement SVE gather loads

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 67 
 target/arm/sve_helper.c| 75 +++
 target/arm/translate-sve.c | 97 ++
 target/arm/sve.decode  | 53 +
 4 files changed, 292 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index b5c093f2fd..3cb7ab9ef2 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -919,6 +919,73 @@ DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, 
ptr, tl, i32)
 
 DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 
+DEF_HELPER_FLAGS_6(sve_ldbsu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhsu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldssu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbss_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhss_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbsu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhsu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldssu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbss_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhss_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_ldbdu_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhdu_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsdu_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldddu_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldbds_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldhds_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_ldsds_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
 DEF_HELPER_FLAGS_6(sve_stbs_zsu, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_6(sve_sths_zsu, TCG_CALL_NO_WG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 07b3d285f2..4edd3d4367 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3546,6 +3546,81 @@ void HELPER(sve_st4dd_r)(CPUARMState *env, void *vg,
 }
 }
 
+/* Loads with a vector index.  */
+
+#define DO_LD1_ZPZ_S(NAME, TYPEI, TYPEM, FN)\
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm,   \
+  target_ulong base, uint32_t desc) \
+{   \
+intptr_t i, oprsz = simd_oprsz(desc) / 8;   \
+unsigned scale = simd_data(desc);   \
+uintptr_t ra = GETPC(); \
+uint32_t *d = vd; TYPEI *m = 

[Qemu-devel] [PATCH v9 10/14] hw/arm/smmuv3: Abort on vfio or vhost case

2018-02-17 Thread Eric Auger
At the moment, the SMMUv3 does not support notification on
TLB invalidation, so let's abort as soon as such a notifier
gets enabled.

Signed-off-by: Eric Auger 
---
 hw/arm/smmuv3.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 384393f..5efe933 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1074,12 +1074,23 @@ static void smmuv3_class_init(ObjectClass *klass, void 
*data)
 dc->realize = smmu_realize;
 }
 
+static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
+   IOMMUNotifierFlag old,
+   IOMMUNotifierFlag new)
+{
+if (old == IOMMU_NOTIFIER_NONE) {
+error_setg(&error_fatal,
+   "SMMUV3: vhost and vfio notifiers not yet supported");
+}
+}
+
 static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
   void *data)
 {
 IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
 
 imrc->translate = smmuv3_translate;
+imrc->notify_flag_changed = smmuv3_notify_flag_changed;
 }
 
 static const TypeInfo smmuv3_type_info = {
-- 
2.5.5




[Qemu-devel] [PATCH v2 65/67] target/arm: Implement SVE floating-point convert to integer

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 30 
 target/arm/sve_helper.c| 16 +++
 target/arm/translate-sve.c | 70 ++
 target/arm/sve.decode  | 16 +++
 4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index bac4bfdc60..0f5fea9045 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -955,6 +955,36 @@ DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_hs, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_ss, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_hd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzs_dd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcvtzu_hh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_hs, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_ss, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_hd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvtzu_dd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 9db01ac2f2..09f5c77254 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3184,6 +3184,22 @@ DO_ZPZ_FP_D(sve_fcvt_hd, uint64_t, 
float16_to_float64_ieee)
 DO_ZPZ_FP_D(sve_fcvt_ds, uint64_t, float64_to_float32)
 DO_ZPZ_FP_D(sve_fcvt_sd, uint64_t, float32_to_float64)
 
+DO_ZPZ_FP(sve_fcvtzs_hh, uint16_t, H1_2, float16_to_int16_round_to_zero)
+DO_ZPZ_FP(sve_fcvtzs_hs, uint32_t, H1_4, float16_to_int32_round_to_zero)
+DO_ZPZ_FP(sve_fcvtzs_ss, uint32_t, H1_4, float32_to_int32_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzs_hd, uint64_t, float16_to_int64_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzs_sd, uint64_t, float32_to_int64_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzs_ds, uint64_t, float64_to_int32_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzs_dd, uint64_t, float64_to_int64_round_to_zero)
+
+DO_ZPZ_FP(sve_fcvtzu_hh, uint16_t, H1_2, float16_to_uint16_round_to_zero)
+DO_ZPZ_FP(sve_fcvtzu_hs, uint32_t, H1_4, float16_to_uint32_round_to_zero)
+DO_ZPZ_FP(sve_fcvtzu_ss, uint32_t, H1_4, float32_to_uint32_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzu_hd, uint64_t, float16_to_uint64_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzu_sd, uint64_t, float32_to_uint64_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzu_ds, uint64_t, float64_to_uint32_round_to_zero)
+DO_ZPZ_FP_D(sve_fcvtzu_dd, uint64_t, float64_to_uint64_round_to_zero)
+
 DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
 DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
 DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 361d545965..bc865dfd15 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3681,6 +3681,76 @@ static void trans_FCVT_sd(DisasContext *s, arg_rpr_esz 
*a, uint32_t insn)
 do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_sd);
 }
 
+static void trans_FCVTZS_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvtzs_hh);
+}
+
+static void trans_FCVTZU_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvtzu_hh);
+}
+
+static void trans_FCVTZS_hs(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvtzs_hs);
+}
+
+static void trans_FCVTZU_hs(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvtzu_hs);
+}
+
+static void trans_FCVTZS_hd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvtzs_hd);
+}
+
+static void trans_FCVTZU_hd(DisasContext *s, arg_rpr_esz 

[Qemu-devel] [PATCH v2 50/67] target/arm: Implement SVE Floating Point Accumulating Reduction Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  7 ++
 target/arm/sve_helper.c| 56 ++
 target/arm/translate-sve.c | 42 ++
 target/arm/sve.decode  |  5 +
 4 files changed, 110 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index a95f077c7f..c4502256d5 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -720,6 +720,13 @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_fadda_h, TCG_CALL_NO_RWG,
+   i64, i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
+   i64, i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fadda_d, TCG_CALL_NO_RWG,
+   i64, i64, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 6622275b44..0e2b3091b0 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2789,6 +2789,62 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, 
uint32_t pred_desc)
 return predtest_ones(d, oprsz, esz_mask);
 }
 
+uint64_t HELPER(sve_fadda_h)(uint64_t nn, void *vm, void *vg,
+ void *status, uint32_t desc)
+{
+intptr_t i = 0, opr_sz = simd_oprsz(desc);
+float16 result = nn;
+
+do {
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
+do {
+if (pg & 1) {
+float16 mm = *(float16 *)(vm + H1_2(i));
+result = float16_add(result, mm, status);
+}
+i += sizeof(float16), pg >>= sizeof(float16);
+} while (i & 15);
+} while (i < opr_sz);
+
+return result;
+}
+
+uint64_t HELPER(sve_fadda_s)(uint64_t nn, void *vm, void *vg,
+ void *status, uint32_t desc)
+{
+intptr_t i = 0, opr_sz = simd_oprsz(desc);
+float32 result = nn;
+
+do {
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
+do {
+if (pg & 1) {
+float32 mm = *(float32 *)(vm + H1_2(i));
+result = float32_add(result, mm, status);
+}
+i += sizeof(float32), pg >>= sizeof(float32);
+} while (i & 15);
+} while (i < opr_sz);
+
+return result;
+}
+
+uint64_t HELPER(sve_fadda_d)(uint64_t nn, void *vm, void *vg,
+ void *status, uint32_t desc)
+{
+intptr_t i = 0, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *m = vm;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i++) {
+if (pg[H1(i)] & 1) {
+nn = float64_add(nn, m[i], status);
+}
+}
+
+return nn;
+}
+
 /* Fully general three-operand expander, controlled by a predicate,
  * With the extra float_status parameter.
  */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 3124368fb5..32f0340738 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3120,6 +3120,48 @@ DO_ZZI(UMIN, umin)
 
 #undef DO_ZZI
 
+/*
+ *** SVE Floating Point Accumulating Reduction Group
+ */
+
+static void trans_FADDA(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+typedef void fadda_fn(TCGv_i64, TCGv_i64, TCGv_ptr,
+  TCGv_ptr, TCGv_ptr, TCGv_i32);
+static fadda_fn * const fns[3] = {
+gen_helper_sve_fadda_h,
+gen_helper_sve_fadda_s,
+gen_helper_sve_fadda_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr t_rm, t_pg, t_fpst;
+TCGv_i64 t_val;
+TCGv_i32 t_desc;
+
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+
+t_val = load_esz(cpu_env, vec_reg_offset(s, a->rn, 0, a->esz), a->esz);
+t_rm = tcg_temp_new_ptr();
+t_pg = tcg_temp_new_ptr();
+tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
+tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
+t_fpst = get_fpstatus_ptr(a->esz == MO_16);
+t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+
+fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
+
+tcg_temp_free_i32(t_desc);
+tcg_temp_free_ptr(t_fpst);
+tcg_temp_free_ptr(t_pg);
+tcg_temp_free_ptr(t_rm);
+
+write_fp_dreg(s, a->rd, t_val);
+tcg_temp_free_i64(t_val);
+}
+
 /*
  *** SVE Floating Point Arithmetic - Unpredicated Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 817833f96e..95a290aed0 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -684,6 +684,11 @@ UMIN_zzi   00100101 .. 101 011 110  .  
@rdn_i8u
 # SVE integer multiply immediate (unpredicated)
 MUL_zzi00100101 .. 110 000 110  

[Qemu-devel] [PATCH v9 03/14] hw/arm/smmu-common: VMSAv8-64 page table walk

2018-02-17 Thread Eric Auger
This patch implements the page table walk for VMSAv8-64.
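
For orientation, one concrete example of the numbers involved (my summary, not
text from the patch): with a 4KB granule (granule_sz == 12), a level-2 block
descriptor maps 2^21 bytes and a level-1 block maps 2^30 bytes, matching the n
values chosen by get_block_pte_address() below. A tiny sketch of the level-2
case, reusing the patch's PTE_ADDRESS() macro:

    /* Illustration only: 4KB granule, level-2 block descriptor => n = 21 */
    static hwaddr example_l2_block_4k(uint64_t pte, uint64_t *bsz)
    {
        *bsz = 1ULL << 21;              /* 2MB block size */
        return PTE_ADDRESS(pte, 21);    /* output address, low 21 bits cleared */
    }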

Signed-off-by: Eric Auger 

---
v8 -> v9:
- remove guest error log on PTE fetch fault
- rename  trace functions
- fix smmu_page_walk_level_res_invalid_pte last arg
- fix PTE_ADDRESS
- turn functions into macros
- make sure to return the actual pte access permission
  into tlbe->perm
- change proto of smmu_ptw*

v7 -> v8:
- rework get_pte
- use LOG_LEVEL_ERROR
- remove error checking in get_block_pte_address
- page table walk simplified (no VFIO replay anymore)
- handle PTW error events
- use dma_memory_read

v6 -> v7:
- fix wrong error handling in walk_page_table
- check perm in smmu_translate

v5 -> v6:
- use IOMMUMemoryRegion
- remove initial_lookup_level()
- fix block replay

v4 -> v5:
- add initial level in translation config
- implement block pte
- rename must_translate into nofail
- introduce call_entry_hook
- small changes to dynamic traces
- smmu_page_walk code moved from smmuv3.c to this file
- remove smmu_translate*

v3 -> v4:
- reworked page table walk to prepare for VFIO integration
  (capability to scan a range of IOVA). Same function is used
  for translate for a single iova. This is largely inspired
  from intel_iommu.c
- as the translate function was not straightforward to me,
  I tried to stick more closely to the VMSA spec.
- remove support of nested stage (kernel driver does not
  support it anyway)
- use error_report and trace events
- add aa64[] field in SMMUTransCfg
---
 hw/arm/smmu-common.c | 232 +++
 hw/arm/smmu-internal.h   |  96 ++
 hw/arm/trace-events  |  10 ++
 include/hw/arm/smmu-common.h |   6 ++
 4 files changed, 344 insertions(+)
 create mode 100644 hw/arm/smmu-internal.h

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index d0516dc..24cc4ba 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -27,6 +27,238 @@
 
 #include "qemu/error-report.h"
 #include "hw/arm/smmu-common.h"
+#include "smmu-internal.h"
+
+/* VMSAv8-64 Translation */
+
+/**
+ * get_pte - Get the content of a page table entry located at
+ * @base_addr[@index]
+ */
+static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
+   SMMUPTWEventInfo *info)
+{
+int ret;
+dma_addr_t addr = baseaddr + index * sizeof(*pte);
+
+ret = dma_memory_read(&address_space_memory, addr,
+  (uint8_t *)pte, sizeof(*pte));
+
+if (ret != MEMTX_OK) {
+info->type = SMMU_PTW_ERR_WALK_EABT;
+info->addr = addr;
+return -EINVAL;
+}
+trace_smmu_get_pte(baseaddr, index, addr, *pte);
+return 0;
+}
+
+/* VMSAv8-64 Translation Table Format Descriptor Decoding */
+
+/**
+ * get_page_pte_address - returns the L3 descriptor output address,
+ * ie. the page frame
+ * ARM ARM spec: Figure D4-17 VMSAv8-64 level 3 descriptor format
+ */
+static inline hwaddr get_page_pte_address(uint64_t pte, int granule_sz)
+{
+return PTE_ADDRESS(pte, granule_sz);
+}
+
+/**
+ * get_table_pte_address - return table descriptor output address,
+ * ie. address of next level table
+ * ARM ARM Figure D4-16 VMSAv8-64 level0, level1, and level 2 descriptor 
formats
+ */
+static inline hwaddr get_table_pte_address(uint64_t pte, int granule_sz)
+{
+return PTE_ADDRESS(pte, granule_sz);
+}
+
+/**
+ * get_block_pte_address - return block descriptor output address and block 
size
+ * ARM ARM Figure D4-16 VMSAv8-64 level0, level1, and level 2 descriptor 
formats
+ */
+static hwaddr get_block_pte_address(uint64_t pte, int level, int granule_sz,
+uint64_t *bsz)
+{
+int n = 0;
+
+switch (granule_sz) {
+case 12:
+if (level == 1) {
+n = 30;
+} else if (level == 2) {
+n = 21;
+}
+break;
+case 14:
+if (level == 2) {
+n = 25;
+}
+break;
+case 16:
+if (level == 2) {
+n = 29;
+}
+break;
+}
+if (!n) {
+error_setg(&error_fatal,
+   "wrong granule/level combination (%d/%d)",
+   granule_sz, level);
+}
+*bsz = 1 << n;
+return PTE_ADDRESS(pte, n);
+}
+
+static inline bool check_perm(int access_attrs, int mem_attrs)
+{
+if (((access_attrs & IOMMU_RO) && !(mem_attrs & IOMMU_RO)) ||
+((access_attrs & IOMMU_WO) && !(mem_attrs & IOMMU_WO))) {
+return false;
+}
+return true;
+}
+
+SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
+{
+if (!extract64(iova, 64 - cfg->tt[0].tsz, cfg->tt[0].tsz - cfg->tbi)) {
+return &cfg->tt[0];
+}
+return &cfg->tt[1];
+}
+
+/**
+ * smmu_ptw_64 - VMSAv8-64 Walk of the page tables for a given IOVA
+ * @cfg: translation config
+ * @iova: iova to translate
+ * @perm: access type
+ * @tlbe: IOMMUTLBEntry (out)
+ * @info: handle to an error info
+ *
+ * Return 0 on success, < 0 on error. In case of error, @info is 

[Qemu-devel] [PATCH v2 60/67] target/arm: Implement SVE FP Fast Reduction Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 35 ++
 target/arm/sve_helper.c| 61 ++
 target/arm/translate-sve.c | 55 +
 target/arm/sve.decode  |  8 ++
 4 files changed, 159 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 7ada12687b..c07b2245ba 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -725,6 +725,41 @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_faddv_h, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_faddv_s, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_faddv_d, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_h, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_s, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxnmv_d, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fminnmv_h, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminnmv_s, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminnmv_d, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fmaxv_h, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxv_s, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fmaxv_d, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fminv_h, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminv_s, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fminv_d, TCG_CALL_NO_RWG,
+   i64, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_fadda_h, TCG_CALL_NO_RWG,
i64, i64, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 9378c8f0b2..29deefcd86 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2832,6 +2832,67 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, 
uint32_t pred_desc)
 return predtest_ones(d, oprsz, esz_mask);
 }
 
+/* Recursive reduction on a function;
+ * C.f. the ARM ARM function ReducePredicated.
+ *
+ * While it would be possible to write this without the DATA temporary,
+ * it is much simpler to process the predicate register this way.
+ * The recursion is bounded to depth 7 (128 fp16 elements), so there's
+ * little to gain with a more complex non-recursive form.
+ */
+#define DO_REDUCE(NAME, TYPE, H, FUNC, IDENT) \
+static TYPE NAME##_reduce(TYPE *data, float_status *status, uintptr_t n) \
+{ \
+if (n == 1) { \
+return *data; \
+} else {  \
+uintptr_t half = n / 2;   \
+TYPE lo = NAME##_reduce(data, status, half);  \
+TYPE hi = NAME##_reduce(data + half, status, half);   \
+return TYPE##_##FUNC(lo, hi, status); \
+} \
+} \
+uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc)\
+{ \
+uintptr_t i, oprsz = simd_oprsz(desc), maxsz = simd_maxsz(desc);  \
+TYPE data[sizeof(ARMVectorReg) / sizeof(TYPE)];   \
+for (i = 0; i < oprsz; ) {\
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));   \
+do {  \
+TYPE nn = *(TYPE *)(vn + H(i));   \
+*(TYPE *)((void *)data + i) = (pg & 1 ? nn : IDENT);  \
+i += sizeof(TYPE), pg >>= sizeof(TYPE);   \
+} while (i & 15); \
+} \
+for (; i < maxsz; i += sizeof(TYPE)) {\
+*(TYPE *)((void *)data + i) = IDENT;  \
+} \
+return NAME##_reduce(data, vs, maxsz / 

[Qemu-devel] [PATCH v9 01/14] hw/arm/smmu-common: smmu base device and datatypes

2018-02-17 Thread Eric Auger
This patch introduces the SMMU base device and class for the ARM
SMMU. Devices for specific versions will be derived from this
base device.

We also introduce some important datatypes.
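
As a hint of how the derivation is meant to be used (a sketch only; the real
SMMUv3 model arrives later in this series and its names may differ), a derived
device would save the base realize in its class_init and chain it from its own
realize, along these lines:

    static void smmuv3_realize(DeviceState *dev, Error **errp)
    {
        SMMUBaseClass *sbc = ARM_SMMU_GET_CLASS(dev);

        /* run the common setup (hash tables, ...) from the base class first */
        sbc->parent_realize(dev, errp);

        /* ... version-specific initialization follows here ... */
    }

    static void smmuv3_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        SMMUBaseClass *sbc = ARM_SMMU_CLASS(klass);

        sbc->parent_realize = dc->realize;  /* dc->realize is smmu_base_realize here */
        dc->realize = smmuv3_realize;
    }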

Signed-off-by: Eric Auger 
Signed-off-by: Prem Mallappa 

---
v8 -> v9:
- remove page walk callback type from this patch (vhost related)
- add a new hash table for caching configuration data
- add reset function
- add asid

v7 -> v8:
- add bus_num property
- add primary-bus property
- add realize and remove instance_init
- rename TYPE and related macros to match naming convention using
  for GIC
- add SMMUPageTableWalkEventInfo
- tt[2] in translation config

v3 -> v4:
- added smmu_find_as_from_bus_num
- SMMU_PCI_BUS_MAX and SMMU_PCI_DEVFN_MAX in smmu-common header
- new fields in SMMUState:
  - iommu_ops, smmu_as_by_busptr, smmu_as_by_bus_num
- add aa64[] field in SMMUTransCfg

v3:
- moved the base code in a separate patch to ease the review.
- clearer separation between base class and smmuv3 class
- translate_* only implemented as class methods

Conflicts:
default-configs/aarch64-softmmu.mak
---
 default-configs/aarch64-softmmu.mak |   1 +
 hw/arm/Makefile.objs|   1 +
 hw/arm/smmu-common.c|  80 +++
 include/hw/arm/smmu-common.h| 124 
 4 files changed, 206 insertions(+)
 create mode 100644 hw/arm/smmu-common.c
 create mode 100644 include/hw/arm/smmu-common.h

diff --git a/default-configs/aarch64-softmmu.mak 
b/default-configs/aarch64-softmmu.mak
index 9ddccf8..6f790f0 100644
--- a/default-configs/aarch64-softmmu.mak
+++ b/default-configs/aarch64-softmmu.mak
@@ -8,3 +8,4 @@ CONFIG_DDC=y
 CONFIG_DPCD=y
 CONFIG_XLNX_ZYNQMP=y
 CONFIG_XLNX_ZYNQMP_ARM=y
+CONFIG_ARM_SMMUV3=y
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
index 1c896ba..c84c5ac 100644
--- a/hw/arm/Makefile.objs
+++ b/hw/arm/Makefile.objs
@@ -20,3 +20,4 @@ obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
 obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
 obj-$(CONFIG_MPS2) += mps2.o
 obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
+obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
new file mode 100644
index 000..86a5aab
--- /dev/null
+++ b/hw/arm/smmu-common.c
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2014-2016 Broadcom Corporation
+ * Copyright (c) 2017 Red Hat, Inc.
+ * Written by Prem Mallappa, Eric Auger
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Author: Prem Mallappa 
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/sysemu.h"
+#include "exec/address-spaces.h"
+#include "trace.h"
+#include "exec/target_page.h"
+#include "qom/cpu.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+
+#include "qemu/error-report.h"
+#include "hw/arm/smmu-common.h"
+
+static void smmu_base_realize(DeviceState *dev, Error **errp)
+{
+SMMUState *s = ARM_SMMU(dev);
+
+s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+s->iotlb = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+}
+
+static void smmu_base_reset(DeviceState *dev)
+{
+SMMUState *s = ARM_SMMU(dev);
+
+g_hash_table_remove_all(s->configs);
+g_hash_table_remove_all(s->iotlb);
+}
+
+static Property smmu_dev_properties[] = {
+DEFINE_PROP_UINT8("bus_num", SMMUState, bus_num, 0),
+DEFINE_PROP_LINK("primary-bus", SMMUState, primary_bus, "PCI", PCIBus *),
+DEFINE_PROP_END_OF_LIST(),
+};
+
+static void smmu_base_class_init(ObjectClass *klass, void *data)
+{
+DeviceClass *dc = DEVICE_CLASS(klass);
+SMMUBaseClass *sbc = ARM_SMMU_CLASS(klass);
+
+dc->props = smmu_dev_properties;
+sbc->parent_realize = dc->realize;
+dc->realize = smmu_base_realize;
+dc->reset = smmu_base_reset;
+}
+
+static const TypeInfo smmu_base_info = {
+.name  = TYPE_ARM_SMMU,
+.parent= TYPE_SYS_BUS_DEVICE,
+.instance_size = sizeof(SMMUState),
+.class_data= NULL,
+.class_size= sizeof(SMMUBaseClass),
+.class_init= smmu_base_class_init,
+.abstract  = true,
+};
+
+static void smmu_base_register_types(void)
+{
+type_register_static(&smmu_base_info);
+}
+
+type_init(smmu_base_register_types)
+
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
new file mode 100644
index 000..8a9d931
--- /dev/null
+++ b/include/hw/arm/smmu-common.h
@@ -0,0 +1,124 @@
+/*
+ * ARM SMMU Support
+ *
+ * Copyright (C) 2015-2016 Broadcom Corporation
+ * Copyright (c) 2017 Red Hat, Inc.
+ * 

[Qemu-devel] [PATCH v2 49/67] target/arm: Implement SVE FP Multiply-Add Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 16 ++
 target/arm/sve_helper.c| 53 ++
 target/arm/translate-sve.c | 41 +++
 target/arm/sve.decode  | 17 +++
 4 files changed, 127 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 84d0a8978c..a95f077c7f 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -827,6 +827,22 @@ DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d80babfae7..6622275b44 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2948,6 +2948,59 @@ DO_ZPZ_FP_D(sve_ucvt_dd, uint64_t, uint64_to_float64)
 #undef DO_ZPZ_FP
 #undef DO_ZPZ_FP_D
 
+/* 4-operand predicated multiply-add.  This requires 7 operands to pass
+ * "properly", so we need to encode some of the registers into DESC.
+ */
+QEMU_BUILD_BUG_ON(SIMD_DATA_SHIFT + 20 > 32);
+
+#define DO_FMLA(NAME, N, H, NEG1, NEG3) \
+void HELPER(NAME)(CPUARMState *env, void *vg, uint32_t desc)\
+{   \
+intptr_t i = 0, opr_sz = simd_oprsz(desc);  \
+unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);  \
+unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);  \
+unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5); \
+unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5); \
+void *vd = &env->vfp.zregs[rd]; \
+void *vn = &env->vfp.zregs[rn]; \
+void *vm = &env->vfp.zregs[rm]; \
+void *va = &env->vfp.zregs[ra]; \
+do {\
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+do {\
+if (likely(pg & 1)) {   \
+float##N e1 = *(uint##N##_t *)(vn + H(i));  \
+float##N e2 = *(uint##N##_t *)(vm + H(i));  \
+float##N e3 = *(uint##N##_t *)(va + H(i));  \
+float##N r; \
+if (NEG1) e1 = float##N##_chs(e1);  \
+if (NEG3) e3 = float##N##_chs(e3);  \
+r = float##N##_muladd(e1, e2, e3, 0, &env->vfp.fp_status);  \
+*(uint##N##_t *)(vd + H(i)) = r;\
+}   \
+i += sizeof(float##N), pg >>= sizeof(float##N); \
+} while (i & 15);   \
+} while (i < opr_sz);   \
+}
+
+DO_FMLA(sve_fmla_zpzzz_h, 16, H1_2, 0, 0)
+DO_FMLA(sve_fmla_zpzzz_s, 32, H1_4, 0, 0)
+DO_FMLA(sve_fmla_zpzzz_d, 64, , 0, 0)
+
+DO_FMLA(sve_fmls_zpzzz_h, 16, H1_2, 0, 1)
+DO_FMLA(sve_fmls_zpzzz_s, 32, H1_4, 0, 1)
+DO_FMLA(sve_fmls_zpzzz_d, 64, , 0, 1)
+
+DO_FMLA(sve_fnmla_zpzzz_h, 16, H1_2, 1, 0)
+DO_FMLA(sve_fnmla_zpzzz_s, 32, H1_4, 1, 0)
+DO_FMLA(sve_fnmla_zpzzz_d, 64, , 1, 0)
+
+DO_FMLA(sve_fnmls_zpzzz_h, 16, H1_2, 1, 1)
+DO_FMLA(sve_fnmls_zpzzz_s, 32, H1_4, 1, 1)
+DO_FMLA(sve_fnmls_zpzzz_d, 

[Qemu-devel] [PATCH v2 46/67] target/arm: Implement SVE load and broadcast quadword

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 51 ++
 target/arm/sve.decode  |  9 
 2 files changed, 60 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fda9a56fd5..7b21102b7e 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3398,6 +3398,57 @@ static void trans_LDNF1_zpri(DisasContext *s, 
arg_rpri_load *a, uint32_t insn)
 trans_LD_zpri(s, a, insn);
 }
 
+static void do_ldrq(DisasContext *s, int zt, int pg, TCGv_i64 addr, int msz)
+{
+static gen_helper_gvec_mem * const fns[4] = {
+gen_helper_sve_ld1bb_r, gen_helper_sve_ld1hh_r,
+gen_helper_sve_ld1ss_r, gen_helper_sve_ld1dd_r,
+};
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr t_pg;
+TCGv_i32 desc;
+
+/* Load the first quadword using the normal predicated load helpers.  */
+desc = tcg_const_i32(simd_desc(16, 16, zt));
+t_pg = tcg_temp_new_ptr();
+
+tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+fns[msz](cpu_env, t_pg, addr, desc);
+
+tcg_temp_free_ptr(t_pg);
+tcg_temp_free_i32(desc);
+
+/* Replicate that first quadword.  */
+if (vsz > 16) {
+unsigned dofs = vec_full_reg_offset(s, zt);
+tcg_gen_gvec_dup_mem(4, dofs + 16, dofs, vsz - 16, vsz - 16);
+}
+}
+
+static void trans_LD1RQ_zprr(DisasContext *s, arg_rprr_load *a, uint32_t insn)
+{
+TCGv_i64 addr;
+int msz = dtype_msz(a->dtype);
+
+if (a->rm == 31) {
+unallocated_encoding(s);
+return;
+}
+
+addr = new_tmp_a64(s);
+tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), msz);
+tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
+do_ldrq(s, a->rd, a->pg, addr, msz);
+}
+
+static void trans_LD1RQ_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
+{
+TCGv_i64 addr = new_tmp_a64(s);
+
+tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn), a->imm * 16);
+do_ldrq(s, a->rd, a->pg, addr, dtype_msz(a->dtype));
+}
+
 static void do_st_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
   int msz, int esz, int nreg)
 {
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 41b8cd8746..6c906e25e9 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -723,6 +723,15 @@ LD_zprr1010010 .. nreg:2 . 110 ... . 
. @rprr_load_msz
 # LD2B, LD2H, LD2W, LD2D; etc.
 LD_zpri1010010 .. nreg:2 0 111 ... . . 
@rpri_load_msz
 
+# SVE load and broadcast quadword (scalar plus scalar)
+LD1RQ_zprr 1010010 .. 00 . 000 ... . . \
+   @rprr_load_msz nreg=0
+
+# SVE load and broadcast quadword (scalar plus immediate)
+# LD1RQB, LD1RQH, LD1RQS, LD1RQD
+LD1RQ_zpri 1010010 .. 00 0 001 ... . . \
+   @rpri_load_msz nreg=0
+
 ### SVE Memory Store Group
 
 # SVE contiguous store (scalar plus immediate)
-- 
2.14.3




[Qemu-devel] [PATCH v2 51/67] target/arm: Implement SVE load and broadcast element

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 +
 target/arm/sve_helper.c| 43 
 target/arm/translate-sve.c | 55 +-
 target/arm/sve.decode  |  5 +
 4 files changed, 107 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index c4502256d5..6c640a92ff 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -274,6 +274,11 @@ DEF_HELPER_FLAGS_3(sve_clr_h, TCG_CALL_NO_RWG, void, ptr, 
ptr, i32)
 DEF_HELPER_FLAGS_3(sve_clr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_clr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve_clri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_clri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_clri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_clri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_asr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 0e2b3091b0..a7dc6f6164 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -994,6 +994,49 @@ void HELPER(sve_clr_d)(void *vd, void *vg, uint32_t desc)
 }
 }
 
+/* Store zero into every inactive element of Zd.  */
+void HELPER(sve_clri_b)(void *vd, void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+for (i = 0; i < opr_sz; i += 1) {
+d[i] &= expand_pred_b(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_clri_h)(void *vd, void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+for (i = 0; i < opr_sz; i += 1) {
+d[i] &= expand_pred_h(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_clri_s)(void *vd, void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+for (i = 0; i < opr_sz; i += 1) {
+d[i] &= expand_pred_s(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_clri_d)(void *vd, void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+for (i = 0; i < opr_sz; i += 1) {
+if (!(pg[H1(i)] & 1)) {
+d[i] = 0;
+}
+}
+}
+
 /* Three-operand expander, immediate operand, controlled by a predicate.
  */
 #define DO_ZPZI(NAME, TYPE, H, OP)  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 32f0340738..b000a2482e 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -584,6 +584,19 @@ static void do_clr_zp(DisasContext *s, int rd, int pg, int 
esz)
vsz, vsz, 0, fns[esz]);
 }
 
+/* Store zero into every inactive element of Zd.  */
+static void do_clr_inactive_zp(DisasContext *s, int rd, int pg, int esz)
+{
+static gen_helper_gvec_2 * const fns[4] = {
+gen_helper_sve_clri_b, gen_helper_sve_clri_h,
+gen_helper_sve_clri_s, gen_helper_sve_clri_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
+   pred_full_reg_offset(s, pg),
+   vsz, vsz, 0, fns[esz]);
+}
+
 static void do_zpzi_ool(DisasContext *s, arg_rpri_esz *a,
 gen_helper_gvec_3 *fn)
 {
@@ -3506,7 +3519,7 @@ static void trans_LDR_pri(DisasContext *s, arg_rri *a, 
uint32_t insn)
  *** SVE Memory - Contiguous Load Group
  */
 
-/* The memory element size of dtype.  */
+/* The memory mode of the dtype.  */
 static const TCGMemOp dtype_mop[16] = {
 MO_UB, MO_UB, MO_UB, MO_UB,
 MO_SL, MO_UW, MO_UW, MO_UW,
@@ -3671,6 +3684,46 @@ static void trans_LD1RQ_zpri(DisasContext *s, 
arg_rpri_load *a, uint32_t insn)
 do_ldrq(s, a->rd, a->pg, addr, dtype_msz(a->dtype));
 }
 
+/* Load and broadcast element.  */
+static void trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a, uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+unsigned psz = pred_full_reg_size(s);
+unsigned esz = dtype_esz[a->dtype];
+TCGLabel *over = gen_new_label();
+TCGv_i64 temp;
+
+/* If the guarding predicate has no bits set, no load occurs.  */
+if (psz <= 8) {
+temp = tcg_temp_new_i64();
+tcg_gen_ld_i64(temp, cpu_env, pred_full_reg_offset(s, a->pg));
+tcg_gen_andi_i64(temp, temp,
+ deposit64(0, 0, psz * 8, pred_esz_masks[esz]));
+tcg_gen_brcondi_i64(TCG_COND_EQ, temp, 0, over);
+tcg_temp_free_i64(temp);
+} else {
+TCGv_i32 t32 = tcg_temp_new_i32();
+find_last_active(s, t32, esz, a->pg);
+
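A quick aside on the sve_clri_* helpers above: they rely on each predicate
bit being widened into an element-sized all-ones mask.  A minimal standalone
sketch (illustration only; expand_pred_b() here is a simple loop standing in
for QEMU's table-driven version) shows the effect for byte elements:

#include <stdint.h>
#include <stdio.h>

/* Widen each of the 8 predicate bits into an all-ones byte.  */
static uint64_t expand_pred_b(uint8_t byte)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (byte & (1u << i)) {
            r |= 0xffull << (i * 8);
        }
    }
    return r;
}

int main(void)
{
    uint64_t z = 0x1122334455667788ull;   /* one 64-bit chunk of Zd */
    uint8_t pg = 0x05;                    /* elements 0 and 2 active */

    /* AND keeps the active bytes and zeroes the inactive ones.  */
    printf("%016llx\n", (unsigned long long)(z & expand_pred_b(pg)));
    /* prints 0000000000660088 */
    return 0;
}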

[Qemu-devel] [PATCH v2 62/67] target/arm: Implement SVE FP Compare with Zero Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 42 ++
 target/arm/sve_helper.c| 45 +
 target/arm/translate-sve.c | 41 +
 target/arm/sve.decode  | 10 ++
 4 files changed, 138 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index c07b2245ba..696c97648b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -767,6 +767,48 @@ DEF_HELPER_FLAGS_5(sve_fadda_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_fadda_d, TCG_CALL_NO_RWG,
i64, i64, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_fcmge0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmge0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmge0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmgt0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmgt0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmgt0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmlt0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmlt0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmlt0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmle0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmle0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmle0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmeq0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmeq0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmeq0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fcmne0_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmne0_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcmne0_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 29deefcd86..6a052ce9ad 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3270,6 +3270,8 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, 
  \
 
 #define DO_FCMGE(TYPE, X, Y, ST)  TYPE##_compare(Y, X, ST) <= 0
 #define DO_FCMGT(TYPE, X, Y, ST)  TYPE##_compare(Y, X, ST) < 0
+#define DO_FCMLE(TYPE, X, Y, ST)  TYPE##_compare(X, Y, ST) <= 0
+#define DO_FCMLT(TYPE, X, Y, ST)  TYPE##_compare(X, Y, ST) < 0
 #define DO_FCMEQ(TYPE, X, Y, ST)  TYPE##_compare_quiet(X, Y, ST) == 0
 #define DO_FCMNE(TYPE, X, Y, ST)  TYPE##_compare_quiet(X, Y, ST) != 0
 #define DO_FCMUO(TYPE, X, Y, ST)  \
@@ -3293,6 +3295,49 @@ DO_FPCMP_PPZZ_ALL(sve_facgt, DO_FACGT)
 #undef DO_FPCMP_PPZZ_H
 #undef DO_FPCMP_PPZZ
 
+/* One operand floating-point comparison against zero, controlled
+ * by a predicate.
+ */
+#define DO_FPCMP_PPZ0(NAME, TYPE, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vg,\
+  void *status, uint32_t desc) \
+{  \
+intptr_t opr_sz = simd_oprsz(desc);\
+intptr_t i = opr_sz, j = ((opr_sz - 1) & -64) >> 3;\
+do {   \
+uint64_t out = 0;  \
+uint64_t pg = *(uint64_t *)(vg + j);   \
+do {   \
+i -= sizeof(TYPE), out <<= sizeof(TYPE);   \
+if ((pg >> (i & 63)) & 1) {\
+TYPE nn = *(TYPE *)(vn + H(i));\
+out |= OP(TYPE, nn, 0, status);\
+}  \
+} while (i & 63);  \
+*(uint64_t *)(vd + j) = out;   \
+j -= 8;\
+} while (i > 0);   \
+}
+
+#define DO_FPCMP_PPZ0_H(NAME, OP) \
+DO_FPCMP_PPZ0(NAME##_h, float16, H1_2, OP)
+#define DO_FPCMP_PPZ0_S(NAME, OP) \
+DO_FPCMP_PPZ0(NAME##_s, float32, H1_4, OP)
+#define DO_FPCMP_PPZ0_D(NAME, OP) \

Re: [Qemu-devel] [PATCH 3/3] arm/vexpress: Add proper display connector emulation

2018-02-17 Thread Philippe Mathieu-Daudé
Hi Linus,

On 02/17/2018 11:00 AM, Linus Walleij wrote:
> This adds the SiI9022 and EDID I2C devices to the ARM Versatile
> Express machine, and selects the two I2C devices necessary in the
> arm-softmmu.mak configuration so everything will build smoothly.
> 
> I am implementing proper handling of the graphics in the Linux
> kernel, and adding proper emulation of the SiI9022 and EDID makes the
> driver probe as nicely as before, retrieving the resolutions
> supported by the "QEMU monitor" and overall just working nicely.
> 
> The assignment of the SiI9022 at address 0x39 and the EDID
> DDC I2C at address 0x50 is not strictly correct: the DDC I2C
> is there all the time but in the actual component it only
> appears once activated inside the SiI9022, so ideally it should
> be added to and removed from the bus by the SiI9022. However, for
> this purpose it works fine to just have it around.

This seems easier to just do now rather than postpone it :)

In your patch #2:

static void sii9022_realize(DeviceState *dev, Error **errp)
{
I2CBus *bus;

bus = I2C_BUS(qdev_get_parent_bus(dev));
i2c_create_slave(bus, TYPE_I2CDDC, 0x50);
}

static void sii9022_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);

...
dc->realize = sii9022_realize;
}
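If the dynamic behaviour described in the commit message is ever wanted, a
rough sketch could look something like the following (illustration only:
the output-enable hook, the "sii9022_state" name and the "ddc" field are
assumptions, not part of the posted patches):

static void sii9022_set_output(sii9022_state *s, bool enable)
{
    I2CBus *bus = I2C_BUS(qdev_get_parent_bus(DEVICE(s)));

    if (enable && !s->ddc) {
        /* DDC becomes visible once the transmitter is enabled */
        s->ddc = i2c_create_slave(bus, TYPE_I2CDDC, 0x50);
    } else if (!enable && s->ddc) {
        object_unparent(OBJECT(s->ddc));
        s->ddc = NULL;
    }
}

Creating it unconditionally, as the patch does, is simpler though, and
works fine for probing the EDID.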

> 
> Signed-off-by: Linus Walleij 
> ---
>  default-configs/arm-softmmu.mak | 2 ++
>  hw/arm/vexpress.c   | 7 ++-
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index ca34cf446242..54f855d07206 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -21,6 +21,8 @@ CONFIG_STELLARIS_INPUT=y
>  CONFIG_STELLARIS_ENET=y
>  CONFIG_SSD0303=y
>  CONFIG_SSD0323=y
> +CONFIG_DDC=y
> +CONFIG_SII9022=y
>  CONFIG_ADS7846=y
>  CONFIG_MAX111X=y
>  CONFIG_SSI=y
> diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
> index dc5928ae1ab5..d6c912c97684 100644
> --- a/hw/arm/vexpress.c
> +++ b/hw/arm/vexpress.c
> @@ -29,6 +29,7 @@
>  #include "hw/arm/arm.h"
>  #include "hw/arm/primecell.h"
>  #include "hw/devices.h"
> +#include "hw/i2c/i2c.h"
>  #include "net/net.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/boards.h"
> @@ -537,6 +538,7 @@ static void vexpress_common_init(MachineState *machine)
>  uint32_t sys_id;
>  DriveInfo *dinfo;
>  pflash_t *pflash0;
> +I2CBus *i2c;
>  ram_addr_t vram_size, sram_size;
>  MemoryRegion *sysmem = get_system_memory();
>  MemoryRegion *vram = g_new(MemoryRegion, 1);
> @@ -628,7 +630,10 @@ static void vexpress_common_init(MachineState *machine)
>  sysbus_create_simple("sp804", map[VE_TIMER01], pic[2]);
>  sysbus_create_simple("sp804", map[VE_TIMER23], pic[3]);
>  
> -/* VE_SERIALDVI: not modelled */
> +dev = sysbus_create_simple("versatile_i2c", map[VE_SERIALDVI], NULL);
> +i2c = (I2CBus *)qdev_get_child_bus(dev, "i2c");
> +i2c_create_slave(i2c, "sii9022", 0x39);
> +i2c_create_slave(i2c, "i2c-ddc", 0x50);
>  
>  sysbus_create_simple("pl031", map[VE_RTC], pic[4]); /* RTC */
>  
> 



[Qemu-devel] [PATCH v2 45/67] target/arm: Implement SVE Memory Contiguous Store Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  29 +++
 target/arm/sve_helper.c| 211 +
 target/arm/translate-sve.c |  68 ++-
 target/arm/sve.decode  |  38 
 4 files changed, 343 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index fcc9ba5f50..74c2d642a3 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -754,3 +754,32 @@ DEF_HELPER_FLAGS_4(sve_ld1hds_r, TCG_CALL_NO_WG, void, 
env, ptr, tl, i32)
 
 DEF_HELPER_FLAGS_4(sve_ld1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st4bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st2hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st3hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st4hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st2ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st3ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st4ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st2dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st3dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st4dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1bh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st1bs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st1bd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1hs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index e542725113..e259e910de 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3023,3 +3023,214 @@ void HELPER(sve_ld4dd_r)(CPUARMState *env, void *vg,
 addr += 4 * 8;
 }
 }
+
+/*
+ * Store contiguous data, protected by a governing predicate.
+ */
+#define DO_ST1(NAME, FN, TYPEE, TYPEM, H)  \
+void HELPER(NAME)(CPUARMState *env, void *vg,  \
+  target_ulong addr, uint32_t desc)\
+{  \
+intptr_t i, oprsz = simd_oprsz(desc);  \
+intptr_t ra = GETPC(); \
+unsigned rd = simd_data(desc); \
+void *vd = &env->vfp.zregs[rd];\
+for (i = 0; i < oprsz; ) { \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));\
+do {   \
+if (pg & 1) {  \
+TYPEM m = *(TYPEE *)(vd + H(i));   \
+FN(env, addr, m, ra);  \
+}  \
+i += sizeof(TYPEE), pg >>= sizeof(TYPEE);  \
+addr += sizeof(TYPEM); \
+} while (i & 15);  \
+}  \
+}
+
+#define DO_ST1_D(NAME, FN, TYPEM)  \
+void HELPER(NAME)(CPUARMState *env, void *vg,  \
+  target_ulong addr, uint32_t desc)\
+{  \
+intptr_t i, oprsz = simd_oprsz(desc) / 8;  \
+intptr_t ra = GETPC(); \
+unsigned rd = simd_data(desc); \
+uint64_t *d = &env->vfp.zregs[rd].d[0];\
+uint8_t *pg = vg;  \
+for (i = 0; i < oprsz; i += 1) {   \
+if (pg[H1(i)] & 1) {   \
+FN(env, addr, d[i], ra);   \
+}  \
+addr += sizeof(TYPEM); \
+}  \
+}
+
+#define DO_ST2(NAME, FN, TYPEE, TYPEM, H)  \
+void HELPER(NAME)(CPUARMState *env, void *vg,  \
+  target_ulong addr, 
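For reference, the DO_ST1 pattern above reduces to the following scalar
model (a sketch only, for byte elements): exactly one memory element is
written per active predicate bit, and the address still advances past
inactive elements.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void st1b_model(uint8_t *mem, const uint8_t *zreg,
                       const uint8_t *pred, int nelem)
{
    for (int i = 0; i < nelem; i++) {
        if (pred[i / 8] & (1u << (i % 8))) {  /* bit i of the predicate */
            mem[i] = zreg[i];                 /* active: store the element */
        }                                     /* inactive: memory untouched */
    }
}

int main(void)
{
    uint8_t mem[8];
    uint8_t z[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t pg[1] = {0x0f};                   /* first four elements active */

    memset(mem, 0xaa, sizeof(mem));
    st1b_model(mem, z, pg, 8);
    for (int i = 0; i < 8; i++) {
        printf("%02x ", (unsigned)mem[i]);    /* 01 02 03 04 aa aa aa aa */
    }
    printf("\n");
    return 0;
}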

[Qemu-devel] [PATCH v2 42/67] target/arm: Implement SVE Integer Wide Immediate - Unpredicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  25 +
 target/arm/sve_helper.c|  41 ++
 target/arm/translate-sve.c | 135 +
 target/arm/sve.decode  |  26 +
 4 files changed, 227 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 1863106d0f..97bfe0f47b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -680,3 +680,28 @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_subri_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_smini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_smini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umaxi_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umaxi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 80b78da834..4f45f11bff 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -803,6 +803,46 @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_VPZ
 #undef DO_VPZ_D
 
+/* Two vector operand, one scalar operand, unpredicated.  */
+#define DO_ZZI(NAME, TYPE, OP)   \
+void HELPER(NAME)(void *vd, void *vn, uint64_t s64, uint32_t desc)   \
+{\
+intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE);\
+TYPE s = s64, *d = vd, *n = vn;  \
+for (i = 0; i < opr_sz; ++i) {   \
+d[i] = OP(n[i], s);  \
+}\
+}
+
+#define DO_SUBR(X, Y)   (Y - X)
+
+DO_ZZI(sve_subri_b, uint8_t, DO_SUBR)
+DO_ZZI(sve_subri_h, uint16_t, DO_SUBR)
+DO_ZZI(sve_subri_s, uint32_t, DO_SUBR)
+DO_ZZI(sve_subri_d, uint64_t, DO_SUBR)
+
+DO_ZZI(sve_smaxi_b, int8_t, DO_MAX)
+DO_ZZI(sve_smaxi_h, int16_t, DO_MAX)
+DO_ZZI(sve_smaxi_s, int32_t, DO_MAX)
+DO_ZZI(sve_smaxi_d, int64_t, DO_MAX)
+
+DO_ZZI(sve_smini_b, int8_t, DO_MIN)
+DO_ZZI(sve_smini_h, int16_t, DO_MIN)
+DO_ZZI(sve_smini_s, int32_t, DO_MIN)
+DO_ZZI(sve_smini_d, int64_t, DO_MIN)
+
+DO_ZZI(sve_umaxi_b, uint8_t, DO_MAX)
+DO_ZZI(sve_umaxi_h, uint16_t, DO_MAX)
+DO_ZZI(sve_umaxi_s, uint32_t, DO_MAX)
+DO_ZZI(sve_umaxi_d, uint64_t, DO_MAX)
+
+DO_ZZI(sve_umini_b, uint8_t, DO_MIN)
+DO_ZZI(sve_umini_h, uint16_t, DO_MIN)
+DO_ZZI(sve_umini_s, uint32_t, DO_MIN)
+DO_ZZI(sve_umini_d, uint64_t, DO_MIN)
+
+#undef DO_ZZI
+
 #undef DO_AND
 #undef DO_ORR
 #undef DO_EOR
@@ -817,6 +857,7 @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
 #undef DO_ASR
 #undef DO_LSR
 #undef DO_LSL
+#undef DO_SUBR
 
 /* Similar to the ARM LastActiveElement pseudocode function, except the
result is multiplied by the element size.  This includes the not found
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 7571d02237..72abcb543a 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -81,6 +81,11 @@ static inline int expand_imm_sh8s(int x)
 return (int8_t)x << (x & 0x100 ? 8 : 0);
 }
 
+static inline int expand_imm_sh8u(int x)
+{
+return (uint8_t)x << (x & 0x100 ? 8 : 0);
+}
+
 /*
  * Include the generated decoder.
  */
@@ -2974,6 +2979,136 @@ static void trans_DUP_i(DisasContext *s, arg_DUP_i *a, 
uint32_t insn)
 tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
 }
 
+static void trans_ADD_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
+{
+
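The expand_imm_sh8u() helper added above is easy to sanity-check in
isolation; a throwaway test (illustration only) shows how bit 8 of the
immediate field selects the byte shift:

#include <stdint.h>
#include <stdio.h>

static int expand_imm_sh8u(int x)
{
    return (uint8_t)x << (x & 0x100 ? 8 : 0);
}

int main(void)
{
    printf("%#x\n", expand_imm_sh8u(0x07f));   /* 0x7f   */
    printf("%#x\n", expand_imm_sh8u(0x17f));   /* 0x7f00 */
    return 0;
}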

[Qemu-devel] [PATCH v2 67/67] target/arm: Implement SVE floating-point unary operations

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 ++
 target/arm/sve_helper.c|  8 
 target/arm/translate-sve.c | 28 
 target/arm/sve.decode  |  4 
 4 files changed, 54 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 749bab0b38..5cebc9121d 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -999,6 +999,20 @@ DEF_HELPER_FLAGS_5(sve_frintx_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_frintx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_frecpx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frecpx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_frecpx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_fsqrt_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fsqrt_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fsqrt_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 7950710be7..4f0985a29e 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3208,6 +3208,14 @@ DO_ZPZ_FP(sve_frintx_h, uint16_t, H1_2, 
float16_round_to_int)
 DO_ZPZ_FP(sve_frintx_s, uint32_t, H1_4, float32_round_to_int)
 DO_ZPZ_FP_D(sve_frintx_d, uint64_t, float64_round_to_int)
 
+DO_ZPZ_FP(sve_frecpx_h, uint16_t, H1_2, helper_frecpx_f16)
+DO_ZPZ_FP(sve_frecpx_s, uint32_t, H1_4, helper_frecpx_f32)
+DO_ZPZ_FP_D(sve_frecpx_d, uint64_t, helper_frecpx_f64)
+
+DO_ZPZ_FP(sve_fsqrt_h, uint16_t, H1_2, float16_sqrt)
+DO_ZPZ_FP(sve_fsqrt_s, uint32_t, H1_4, float32_sqrt)
+DO_ZPZ_FP_D(sve_fsqrt_d, uint64_t, float64_sqrt)
+
 DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
 DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
 DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 5f1c4984b8..f1ff03 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3831,6 +3831,34 @@ static void trans_FRINTA(DisasContext *s, arg_rpr_esz 
*a, uint32_t insn)
 do_frint_mode(s, a, float_round_ties_away);
 }
 
+static void trans_FRECPX(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_3_ptr * const fns[3] = {
+gen_helper_sve_frecpx_h,
+gen_helper_sve_frecpx_s,
+gen_helper_sve_frecpx_d
+};
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+}
+
+static void trans_FSQRT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_3_ptr * const fns[3] = {
+gen_helper_sve_fsqrt_h,
+gen_helper_sve_fsqrt_s,
+gen_helper_sve_fsqrt_d
+};
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+}
+}
+
 static void trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 {
 do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index e06c0c5279..fbd9cf1384 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -857,6 +857,10 @@ FRINTA 01100101 .. 000 100 101 ... . . 
@rd_pg_rn
 FRINTX 01100101 .. 000 110 101 ... . . @rd_pg_rn
 FRINTI 01100101 .. 000 111 101 ... . . @rd_pg_rn
 
+# SVE floating-point unary operations
+FRECPX 01100101 .. 001 100 101 ... . . @rd_pg_rn
+FSQRT  01100101 .. 001 101 101 ... . . @rd_pg_rn
+
 # SVE integer convert to floating-point
 SCVTF_hh   01100101 01 010 01 0 101 ... . .@rd_pg_rn_e0
 SCVTF_sh   01100101 01 010 10 0 101 ... . .@rd_pg_rn_e0
-- 
2.14.3




[Qemu-devel] [PATCH v2 61/67] target/arm: Implement SVE Floating Point Unary Operations - Unpredicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  8 
 target/arm/translate-sve.c | 43 +++
 target/arm/vec_helper.c| 20 
 target/arm/sve.decode  |  5 +
 4 files changed, 76 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index a8d824b085..4bfefe42b2 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -565,6 +565,14 @@ DEF_HELPER_2(dc_zva, void, env, i64)
 DEF_HELPER_FLAGS_2(neon_pmull_64_lo, TCG_CALL_NO_RWG_SE, i64, i64, i64)
 DEF_HELPER_FLAGS_2(neon_pmull_64_hi, TCG_CALL_NO_RWG_SE, i64, i64, i64)
 
+DEF_HELPER_FLAGS_4(gvec_frecpe_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_frecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_frecpe_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_frsqrte_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_frsqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_frsqrte_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index a77ddf0f4b..463ff7b690 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3235,6 +3235,49 @@ DO_VPZ(FMAXNMV, fmaxnmv)
 DO_VPZ(FMINV, fminv)
 DO_VPZ(FMAXV, fmaxv)
 
+/*
+ *** SVE Floating Point Unary Operations - Unpredicated Group
+ */
+
+static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
+{
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+
+tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   status, vsz, vsz, 0, fn);
+tcg_temp_free_ptr(status);
+}
+
+static void trans_FRECPE(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_2_ptr * const fns[3] = {
+gen_helper_gvec_frecpe_h,
+gen_helper_gvec_frecpe_s,
+gen_helper_gvec_frecpe_d,
+};
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zz_fp(s, a, fns[a->esz - 1]);
+}
+}
+
+static void trans_FRSQRTE(DisasContext *s, arg_rr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_2_ptr * const fns[3] = {
+gen_helper_gvec_frsqrte_h,
+gen_helper_gvec_frsqrte_s,
+gen_helper_gvec_frsqrte_d,
+};
+if (a->esz == 0) {
+unallocated_encoding(s);
+} else {
+do_zz_fp(s, a, fns[a->esz - 1]);
+}
+}
+
 /*
  *** SVE Floating Point Accumulating Reduction Group
  */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index e711a3217d..60dc07cf87 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -40,6 +40,26 @@
 #define H4(x)   (x)
 #endif
 
+#define DO_2OP(NAME, FUNC, TYPE) \
+void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc)  \
+{ \
+intptr_t i, oprsz = simd_oprsz(desc); \
+TYPE *d = vd, *n = vn;\
+for (i = 0; i < oprsz / sizeof(TYPE); i++) {  \
+d[i] = FUNC(n[i], stat);  \
+} \
+}
+
+DO_2OP(gvec_frecpe_h, helper_recpe_f16, float16)
+DO_2OP(gvec_frecpe_s, helper_recpe_f32, float32)
+DO_2OP(gvec_frecpe_d, helper_recpe_f64, float64)
+
+DO_2OP(gvec_frsqrte_h, helper_rsqrte_f16, float16)
+DO_2OP(gvec_frsqrte_s, helper_rsqrte_f32, float32)
+DO_2OP(gvec_frsqrte_d, helper_rsqrte_f64, float64)
+
+#undef DO_2OP
+
 /* Floating-point trigonometric starting value.
  * See the ARM ARM pseudocode function FPTrigSMul.
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index feb8c65e89..112e85174c 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -747,6 +747,11 @@ FMINNMV 01100101 .. 000 101 001 ... . . @rd_pg_rn
 FMAXV  01100101 .. 000 110 001 ... . . @rd_pg_rn
 FMINV  01100101 .. 000 111 001 ... . . @rd_pg_rn
 
+## SVE Floating Point Unary Operations - Unpredicated Group
+
+FRECPE 01100101 .. 001 110 001110 . .  @rd_rn
+FRSQRTE01100101 .. 001 111 001110 . .  @rd_rn
+
 ### SVE FP Accumulating Reduction Group
 
 # SVE floating-point serial reduction (predicated)
-- 
2.14.3




[Qemu-devel] [PATCH v2 38/67] target/arm: Implement SVE Partition Break Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  18 
 target/arm/sve_helper.c| 247 +
 target/arm/translate-sve.c |  96 ++
 target/arm/sve.decode  |  19 
 4 files changed, 380 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ae38c0a4be..f0a3ed3414 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -658,3 +658,21 @@ DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_brkpa, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpb, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpas, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_brkpbs, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brka_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_z, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brka_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkb_m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkas_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_z, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkas_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index b74db681f2..d6d2220f8b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2455,3 +2455,250 @@ DO_CMP_PPZI_D(sve_cmpls_ppzi_d, uint64_t, <=)
 #undef DO_CMP_PPZI_S
 #undef DO_CMP_PPZI_D
 #undef DO_CMP_PPZI
+
+/* Similar to the ARM LastActive pseudocode function.  */
+static bool last_active_pred(void *vd, void *vg, intptr_t oprsz)
+{
+intptr_t i;
+
+for (i = QEMU_ALIGN_UP(oprsz, 8) - 8; i >= 0; i -= 8) {
+uint64_t pg = *(uint64_t *)(vg + i);
+if (pg) {
+return (pow2floor(pg) & *(uint64_t *)(vd + i)) != 0;
+}
+}
+return 0;
+}
+
+/* Compute a mask into RETB that is true for all G, up to and including
+ * (if after) or excluding (if !after) the first G & N.
+ * Return true if BRK found.
+ */
+static bool compute_brk(uint64_t *retb, uint64_t n, uint64_t g,
+bool brk, bool after)
+{
+uint64_t b;
+
+if (brk) {
+b = 0;
+} else if ((g & n) == 0) {
+/* For all G, no N are set; break not found.  */
+b = g;
+} else {
+/* Break somewhere in N.  Locate it.  */
+b = g & n;/* guard true, pred true*/
+b = b & -b;   /* first such */
+if (after) {
+b = b | (b - 1);  /* break after same */
+} else {
+b = b - 1;/* break before same */
+}
+brk = true;
+}
+
+*retb = b;
+return brk;
+}
+
+/* Compute a zeroing BRK.  */
+static void compute_brk_z(uint64_t *d, uint64_t *n, uint64_t *g,
+  intptr_t oprsz, bool after)
+{
+bool brk = false;
+intptr_t i;
+
+for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+uint64_t this_b, this_g = g[i];
+
+brk = compute_brk(&this_b, n[i], this_g, brk, after);
+d[i] = this_b & this_g;
+}
+}
+
+/* Likewise, but also compute flags.  */
+static uint32_t compute_brks_z(uint64_t *d, uint64_t *n, uint64_t *g,
+   intptr_t oprsz, bool after)
+{
+uint32_t flags = PREDTEST_INIT;
+bool brk = false;
+intptr_t i;
+
+for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+uint64_t this_b, this_d, this_g = g[i];
+
+brk = compute_brk(&this_b, n[i], this_g, brk, after);
+d[i] = this_d = this_b & this_g;
+flags = iter_predtest_fwd(this_d, this_g, flags);
+}
+return flags;
+}
+
+/* Given a computation function, compute a merging BRK.  */
+static void compute_brk_m(uint64_t *d, uint64_t *n, uint64_t *g,
+  intptr_t oprsz, bool after)
+{
+bool brk = false;
+intptr_t i;
+
+for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+uint64_t this_b, this_g = g[i];
+
+brk = compute_brk(&this_b, n[i], this_g, brk, after);
+d[i] = (this_b & this_g) | (d[i] & ~this_g);
+}
+}
+
+/* Likewise, but also compute flags.  */
+static uint32_t compute_brks_m(uint64_t *d, uint64_t *n, uint64_t *g,
+   intptr_t oprsz, bool after)
+{
+uint32_t flags = PREDTEST_INIT;
+bool brk = false;
+intptr_t i;
+
+for (i = 
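The bit trick in compute_brk() above is dense; a throwaway example
(illustration, not part of the patch) with an eight-element guard makes
the "break after" versus "break before" masks concrete:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t g = 0xff;   /* all eight elements governed */
    uint64_t n = 0x28;   /* first true element at bit 3 */
    uint64_t b = g & n;  /* 0x28: guard true, pred true */

    b &= -b;             /* 0x08: isolate the first such bit */
    printf("after : %#llx\n", (unsigned long long)(b | (b - 1)));  /* 0xf */
    printf("before: %#llx\n", (unsigned long long)(b - 1));        /* 0x7 */
    return 0;
}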

[Qemu-devel] [PATCH v2 40/67] target/arm: Implement SVE Integer Compare - Scalars Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  2 +
 target/arm/sve_helper.c| 31 
 target/arm/translate-sve.c | 92 ++
 target/arm/sve.decode  |  8 
 4 files changed, 133 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index dd4f8f754d..1863106d0f 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -678,3 +678,5 @@ DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index dd884bdd1c..80b78da834 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2716,3 +2716,34 @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t 
pred_desc)
 }
 return sum;
 }
+
+uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
+{
+uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+uint64_t esz_mask = pred_esz_masks[esz];
+ARMPredicateReg *d = vd;
+uint32_t flags;
+intptr_t i;
+
+/* Begin with a zero predicate register.  */
+flags = do_zero(d, oprsz);
+if (count == 0) {
+return flags;
+}
+
+/* Scale from predicate element count to bits.  */
+count <<= esz;
+/* Bound to the bits in the predicate.  */
+count = MIN(count, oprsz * 8);
+
+/* Set all of the requested bits.  */
+for (i = 0; i < count / 64; ++i) {
+d->p[i] = esz_mask;
+}
+if (count & 63) {
+d->p[i] = ~(-1ull << (count & 63)) & esz_mask;
+}
+
+return predtest_ones(d, oprsz, esz_mask);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 038800cc86..4b92a55c21 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2847,6 +2847,98 @@ static void trans_SINCDECP_z(DisasContext *s, 
arg_incdec2_pred *a,
 do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
 }
 
+/*
+ *** SVE Integer Compare Scalars Group
+ */
+
+static void trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
+{
+TCGCond cond = (a->ne ? TCG_COND_NE : TCG_COND_EQ);
+TCGv_i64 rn = read_cpu_reg(s, a->rn, a->sf);
+TCGv_i64 rm = read_cpu_reg(s, a->rm, a->sf);
+TCGv_i64 cmp = tcg_temp_new_i64();
+
+tcg_gen_setcond_i64(cond, cmp, rn, rm);
+tcg_gen_extrl_i64_i32(cpu_NF, cmp);
+tcg_temp_free_i64(cmp);
+
+/* VF = !NF & !CF.  */
+tcg_gen_xori_i32(cpu_VF, cpu_NF, 1);
+tcg_gen_andc_i32(cpu_VF, cpu_VF, cpu_CF);
+
+/* Both NF and VF actually look at bit 31.  */
+tcg_gen_neg_i32(cpu_NF, cpu_NF);
+tcg_gen_neg_i32(cpu_VF, cpu_VF);
+}
+
+static void trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
+{
+TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
+TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
+TCGv_i64 t0 = tcg_temp_new_i64();
+TCGv_i64 t1 = tcg_temp_new_i64();
+TCGv_i32 t2, t3;
+TCGv_ptr ptr;
+unsigned desc, vsz = vec_full_reg_size(s);
+TCGCond cond;
+
+if (!a->sf) {
+if (a->u) {
+tcg_gen_ext32u_i64(op0, op0);
+tcg_gen_ext32u_i64(op1, op1);
+} else {
+tcg_gen_ext32s_i64(op0, op0);
+tcg_gen_ext32s_i64(op1, op1);
+}
+}
+
+/* For the helper, compress the different conditions into a computation
+ * of how many iterations for which the condition is true.
+ *
+ * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
+ * 2**64 iterations, overflowing to 0.  Of course, predicate registers
+ * aren't that large, so any value >= predicate size is sufficient.
+ */
+tcg_gen_sub_i64(t0, op1, op0);
+
+/* t0 = MIN(op1 - op0, vsz).  */
+if (a->eq) {
+/* Equality means one more iteration.  */
+tcg_gen_movi_i64(t1, vsz - 1);
+tcg_gen_movcond_i64(TCG_COND_LTU, t0, t0, t1, t0, t1);
+tcg_gen_addi_i64(t0, t0, 1);
+} else {
+tcg_gen_movi_i64(t1, vsz);
+tcg_gen_movcond_i64(TCG_COND_LTU, t0, t0, t1, t0, t1);
+}
+
+/* t0 = (condition true ? t0 : 0).  */
+cond = (a->u
+? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
+: (a->eq ? TCG_COND_LE : TCG_COND_LT));
+tcg_gen_movi_i64(t1, 0);
+tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+
+t2 = tcg_temp_new_i32();
+tcg_gen_extrl_i64_i32(t2, t0);
+tcg_temp_free_i64(t0);
+tcg_temp_free_i64(t1);
+
+desc = (vsz / 8) - 2;
+desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+t3 = tcg_const_i32(desc);
+
+ptr = tcg_temp_new_ptr();
+tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
+
+gen_helper_sve_while(t2, ptr, t2, t3);
+  
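As a rough scalar model of what the comment in trans_WHILE describes
(a sketch for the signed WHILELT flavour only; the unsigned forms and the
"equality adds one iteration" case are left out):

#include <stdint.h>
#include <stdio.h>

static uint64_t whilelt_count(int64_t op0, int64_t op1, uint64_t vl_elem)
{
    if (op0 >= op1) {
        return 0;                        /* condition false from the start */
    }
    uint64_t n = (uint64_t)(op1 - op0);  /* iterations while op0 < op1 */
    return n < vl_elem ? n : vl_elem;    /* clamp to the vector length */
}

int main(void)
{
    printf("%llu\n", (unsigned long long)whilelt_count(0, 5, 16));    /* 5  */
    printf("%llu\n", (unsigned long long)whilelt_count(3, 100, 16));  /* 16 */
    printf("%llu\n", (unsigned long long)whilelt_count(7, 7, 16));    /* 0  */
    return 0;
}

The helper then simply sets that many elements in the destination
predicate, which is what sve_while() above does.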

[Qemu-devel] [PATCH v2 57/67] target/arm: Implement SVE floating-point compare vectors

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 49 +++
 target/arm/sve_helper.c| 64 ++
 target/arm/translate-sve.c | 41 +
 target/arm/sve.decode  | 11 
 4 files changed, 165 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 3cb7ab9ef2..30373e3fc7 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -839,6 +839,55 @@ DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_6(sve_fcmge_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmge_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmge_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmgt_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmgt_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmgt_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmeq_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmeq_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmeq_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmne_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmne_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmne_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fcmuo_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmuo_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fcmuo_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_facge_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facge_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facge_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_facgt_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facgt_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_facgt_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4edd3d4367..ace613684d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3100,6 +3100,70 @@ DO_FMLA(sve_fnmls_zpzzz_d, 64, , 1, 1)
 
 #undef DO_FMLA
 
+/* Two operand floating-point comparison controlled by a predicate.
+ * Unlike the integer version, we are not allowed to optimistically
+ * compare operands, since the comparison may have side effects wrt
+ * the FPSR.
+ */
+#define DO_FPCMP_PPZZ(NAME, TYPE, H, OP)\
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg,   \
+  void *status, uint32_t desc)  \
+{   \
+intptr_t opr_sz = simd_oprsz(desc); \
+intptr_t i = opr_sz, j = ((opr_sz - 1) & -64) >> 3; \
+do {\
+uint64_t out = 0;   \
+uint64_t pg = *(uint64_t *)(vg + j);\
+do {\
+i -= sizeof(TYPE), out <<= sizeof(TYPE);\
+if ((pg >> (i & 63)) & 1) { \
+TYPE nn = *(TYPE *)(vn + H(i)); \
+TYPE mm = *(TYPE *)(vm + H(i)); \
+out |= OP(TYPE, nn, mm, status);\
+}   \
+} while (i & 63);   \
+*(uint64_t *)(vd + j) = out;\
+j -= 8;   
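The FPSR side effects the comment refers to are the IEEE exception flags:
an ordered compare involving a NaN must raise the invalid-operation flag,
so lanes that the predicate masks off cannot be compared speculatively.
A small host-side illustration (assuming the host C implementation follows
Annex F for the relational operators):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double a = NAN, b = 1.0;
    volatile int r;

    feclearexcept(FE_ALL_EXCEPT);
    r = (a >= b);                            /* ordered compare, signals on NaN */
    printf("result=%d invalid=%d\n", r,
           fetestexcept(FE_INVALID) != 0);   /* typically: result=0 invalid=1 */
    return 0;
}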

[Qemu-devel] [PATCH v2 64/67] target/arm: Implement SVE floating-point convert precision

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 13 +
 target/arm/sve_helper.c| 27 +++
 target/arm/translate-sve.c | 30 ++
 target/arm/sve.decode  |  8 
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ce5fe24dc2..bac4bfdc60 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -942,6 +942,19 @@ DEF_HELPER_FLAGS_6(sve_fmins_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(sve_fmins_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i64, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_fcvt_sh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_dh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_hs, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 53e3516f47..9db01ac2f2 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3157,6 +3157,33 @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void 
*status, uint32_t desc) \
 }   \
 }
 
+static inline float32 float16_to_float32_ieee(float16 f, float_status *s)
+{
+return float16_to_float32(f, true, s);
+}
+
+static inline float64 float16_to_float64_ieee(float16 f, float_status *s)
+{
+return float16_to_float64(f, true, s);
+}
+
+static inline float16 float32_to_float16_ieee(float32 f, float_status *s)
+{
+return float32_to_float16(f, true, s);
+}
+
+static inline float16 float64_to_float16_ieee(float64 f, float_status *s)
+{
+return float64_to_float16(f, true, s);
+}
+
+DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, float32_to_float16_ieee)
+DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, float16_to_float32_ieee)
+DO_ZPZ_FP_D(sve_fcvt_dh, uint64_t, float64_to_float16_ieee)
+DO_ZPZ_FP_D(sve_fcvt_hd, uint64_t, float16_to_float64_ieee)
+DO_ZPZ_FP_D(sve_fcvt_ds, uint64_t, float64_to_float32)
+DO_ZPZ_FP_D(sve_fcvt_sd, uint64_t, float32_to_float64)
+
 DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
 DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
 DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index e185af29e3..361d545965 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3651,6 +3651,36 @@ static void do_zpz_ptr(DisasContext *s, int rd, int rn, 
int pg,
 tcg_temp_free_ptr(status);
 }
 
+static void trans_FCVT_sh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvt_sh);
+}
+
+static void trans_FCVT_hs(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hs);
+}
+
+static void trans_FCVT_dh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_fcvt_dh);
+}
+
+static void trans_FCVT_hd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hd);
+}
+
+static void trans_FCVT_ds(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_ds);
+}
+
+static void trans_FCVT_sd(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_sd);
+}
+
 static void trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
 {
 do_zpz_ptr(s, a->rd, a->rn, a->pg, true, gen_helper_sve_scvt_hh);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index ca54895900..d44cf17fc8 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -824,6 +824,14 @@ FNMLS_zpzzz 01100101 .. 1 . 111 ... . . @rdn_pg_rm_ra
 
 ### SVE FP Unary Operations Predicated Group
 
+# SVE floating-point convert precision
+FCVT_sh    01100101 10 0010 00 101 ... . . @rd_pg_rn_e0
+FCVT_hs    01100101 10 0010 01 101 ... . . @rd_pg_rn_e0
+FCVT_dh    01100101 11 0010 00 101 ... . . @rd_pg_rn_e0
+FCVT_hd    01100101 11 0010 01 101 ... . . @rd_pg_rn_e0
+FCVT_ds    01100101 11 0010 10 101 ... . . @rd_pg_rn_e0
+FCVT_sd    01100101 11 0010 11 101 ...

[Qemu-devel] [PATCH v2 54/67] target/arm: Implement SVE prefetches

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c |  9 +
 target/arm/sve.decode  | 23 +++
 2 files changed, 32 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ca49b94924..63c7a0e8d8 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3958,3 +3958,12 @@ static void trans_ST1_zprz(DisasContext *s, arg_ST1_zprz 
*a, uint32_t insn)
 do_mem_zpz(s, a->rd, a->pg, a->rm, a->scale * a->msz,
cpu_reg_sp(s, a->rn), fn);
 }
+
+/*
+ * Prefetches
+ */
+
+static void trans_PRF(DisasContext *s, arg_PRF *a, uint32_t insn)
+{
+/* Prefetch is a nop within QEMU.  */
+}
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index edd9340c02..f0144aa2d0 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -801,6 +801,29 @@ LD1RQ_zprr 1010010 .. 00 . 000 ... . . \
 LD1RQ_zpri 1010010 .. 00 0 001 ... . . \
@rpri_load_msz nreg=0
 
+# SVE 32-bit gather prefetch (scalar plus 32-bit scaled offsets)
+PRF110 00 -1 - 0-- --- - 0 
+
+# SVE 32-bit gather prefetch (vector plus immediate)
+PRF110 -- 00 - 111 --- - 0 
+
+# SVE contiguous prefetch (scalar plus immediate)
+PRF110 11 1- - 0-- --- - 0 
+
+# SVE contiguous prefetch (scalar plus scalar)
+PRF110 -- 00 - 110 --- - 0 
+
+### SVE Memory 64-bit Gather Group
+
+# SVE 64-bit gather prefetch (scalar plus 64-bit scaled offsets)
+PRF1100010 00 11 - 1-- --- - 0 
+
+# SVE 64-bit gather prefetch (scalar plus unpacked 32-bit scaled offsets)
+PRF1100010 00 -1 - 0-- --- - 0 
+
+# SVE 64-bit gather prefetch (vector plus immediate)
+PRF1100010 -- 00 - 111 --- - 0 
+
 ### SVE Memory Store Group
 
 # SVE store predicate register
-- 
2.14.3




[Qemu-devel] [PATCH v2 59/67] target/arm: Implement SVE Floating Point Multiply Indexed Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h| 14 ++
 target/arm/translate-sve.c | 44 +++
 target/arm/vec_helper.c| 64 ++
 target/arm/sve.decode  | 19 ++
 4 files changed, 141 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index f3ce58e276..a8d824b085 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -584,6 +584,20 @@ DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_fmul_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fmla_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 6ce1b01b9a..cf2a4d3284 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3136,6 +3136,50 @@ DO_ZZI(UMIN, umin)
 
 #undef DO_ZZI
 
+/*
+ *** SVE Floating Point Multiply-Add Indexed Group
+ */
+
+static void trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a, uint32_t insn)
+{
+static gen_helper_gvec_4_ptr * const fns[3] = {
+gen_helper_gvec_fmla_idx_h,
+gen_helper_gvec_fmla_idx_s,
+gen_helper_gvec_fmla_idx_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   status, vsz, vsz, a->index * 2 + a->sub,
+   fns[a->esz - 1]);
+tcg_temp_free_ptr(status);
+}
+
+/*
+ *** SVE Floating Point Multiply Indexed Group
+ */
+
+static void trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a, uint32_t insn)
+{
+static gen_helper_gvec_3_ptr * const fns[3] = {
+gen_helper_gvec_fmul_idx_h,
+gen_helper_gvec_fmul_idx_s,
+gen_helper_gvec_fmul_idx_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+
+tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   status, vsz, vsz, a->index, fns[a->esz - 1]);
+tcg_temp_free_ptr(status);
+}
+
 /*
  *** SVE Floating Point Accumulating Reduction Group
  */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index ad5c29cdd5..e711a3217d 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -24,6 +24,22 @@
 #include "fpu/softfloat.h"
 
 
+/* Note that vector data is stored in host-endian 64-bit chunks,
+   so addressing units smaller than that needs a host-endian fixup.  */
+#ifdef HOST_WORDS_BIGENDIAN
+#define H1(x)   ((x) ^ 7)
+#define H1_2(x) ((x) ^ 6)
+#define H1_4(x) ((x) ^ 4)
+#define H2(x)   ((x) ^ 3)
+#define H4(x)   ((x) ^ 1)
+#else
+#define H1(x)   (x)
+#define H1_2(x) (x)
+#define H1_4(x) (x)
+#define H2(x)   (x)
+#define H4(x)   (x)
+#endif
+
 /* Floating-point trigonometric starting value.
  * See the ARM ARM pseudocode function FPTrigSMul.
  */
@@ -92,3 +108,51 @@ DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
 
 #endif
 #undef DO_3OP
+
+/* For the indexed ops, SVE applies the index per 128-bit vector segment.
+ * For AdvSIMD, there is of course only one such vector segment.
+ */
+
+#define DO_MUL_IDX(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{  \
+intptr_t i, j, oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE);  \
+intptr_t idx = simd_data(desc);\
+TYPE *d = vd, *n = vn, *m = vm;\
+for (i = 0; i < oprsz / sizeof(TYPE); i += segment) {  \
+TYPE mm = m[H(i + idx)];   \
+for (j = 0; j < segment; j++) {\
+d[i + j] = TYPE##_mul(n[i + j], mm, stat); \
+}  \
+}  
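The "index per 128-bit segment" rule is the part that differs from a plain
scalar multiply; a scalar model of the DO_MUL_IDX expansion above for
32-bit elements (illustration only, FP state and rounding ignored):

#include <stdio.h>

int main(void)
{
    float n[8] = {1, 2, 3, 4, 5, 6, 7, 8};     /* two 128-bit segments */
    float m[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    float d[8];
    int idx = 1;                               /* index within each segment */

    for (int i = 0; i < 8; i += 4) {           /* four floats per segment */
        float mm = m[i + idx];                 /* multiplier from own segment */
        for (int j = 0; j < 4; j++) {
            d[i + j] = n[i + j] * mm;
        }
    }
    for (int i = 0; i < 8; i++) {
        printf("%g ", d[i]);                   /* 20 40 60 80 300 360 420 480 */
    }
    printf("\n");
    return 0;
}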

[Qemu-devel] [PATCH v2 37/67] target/arm: Implement SVE Integer Compare - Immediate Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 44 +++
 target/arm/sve_helper.c| 88 ++
 target/arm/translate-sve.c | 63 +
 target/arm/sve.decode  | 23 
 4 files changed, 218 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 6ffd1fbe8e..ae38c0a4be 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -605,6 +605,50 @@ DEF_HELPER_FLAGS_5(sve_cmplo_ppzw_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_cmpls_ppzw_s, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_b, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_h, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_s, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cmpeq_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpne_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpgt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpge_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplt_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmple_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphs_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmphi_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmplo_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cmpls_ppzi_d, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index ae433861f8..b74db681f2 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2367,3 +2367,91 @@ DO_CMP_PPZW_S(sve_cmpls_ppzw_s, uint32_t, uint64_t, <=)
 #undef DO_CMP_PPZW_H
 #undef DO_CMP_PPZW_S
 #undef DO_CMP_PPZW
+
+/* Similar, but the second source is immediate.  */
+#define DO_CMP_PPZI(NAME, TYPE, OP, H, MASK) \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)   \
+{\
+intptr_t opr_sz = simd_oprsz(desc);  \
+uint32_t flags = PREDTEST_INIT;  \
+TYPE mm = simd_data(desc);   \
+intptr_t i = opr_sz;
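A scalar sketch of the semantics the DO_CMP_PPZI expansion implements
(illustration only; the PREDTEST flags computation is omitted), for a
byte-sized compare-greater-than-immediate:

#include <stdint.h>
#include <stdio.h>

static void cmpgt_imm_model(uint8_t *pd, const int8_t *zn,
                            const uint8_t *pg, int8_t imm, int nelem)
{
    for (int i = 0; i < nelem; i++) {
        int bit = (pg[i / 8] >> (i % 8)) & 1;      /* governing predicate */
        int res = bit && (zn[i] > imm);            /* inactive lanes -> 0 */
        pd[i / 8] = (pd[i / 8] & ~(1u << (i % 8))) | (res << (i % 8));
    }
}

int main(void)
{
    int8_t z[8] = {-3, 5, 0, 9, 2, -1, 7, 4};
    uint8_t pg[1] = {0xff}, pd[1] = {0};

    cmpgt_imm_model(pd, z, pg, 3, 8);
    printf("%#x\n", pd[0]);   /* 0x4a: elements 1, 3 and 6 are > 3 */
    return 0;
}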

[Qemu-devel] [PATCH v2 31/67] target/arm: Implement SVE conditionally broadcast/extract element

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   2 +
 target/arm/sve_helper.c|  11 ++
 target/arm/translate-sve.c | 299 +
 target/arm/sve.decode  |  20 +++
 4 files changed, 332 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index d977aea00d..a58fb4ba01 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -463,6 +463,8 @@ DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 87a1a32232..ee289be642 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2050,3 +2050,14 @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, 
uint32_t desc)
 d[j] = 0;
 }
 }
+
+/* Similar to the ARM LastActiveElement pseudocode function, except the
+   result is multiplied by the element size.  This includes the not found
+   indication; e.g. not found for esz=3 is -8.  */
+int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
+{
+intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+
+return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 21531b259c..207a22a0bc 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2123,6 +2123,305 @@ static void trans_COMPACT(DisasContext *s, arg_rpr_esz 
*a, uint32_t insn)
 do_zpz_ool(s, a, fns[a->esz]);
 }
 
+/* Call the helper that computes the ARM LastActiveElement pseudocode
+   function, scaled by the element size.  This includes the not found
+   indication; e.g. not found for esz=3 is -8.  */
+static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
+{
+/* Predicate sizes may be smaller and cannot use simd_desc.  We cannot
+   round up, as we do elsewhere, because we need the exact size.  */
+TCGv_ptr t_p = tcg_temp_new_ptr();
+TCGv_i32 t_desc;
+unsigned vsz = pred_full_reg_size(s);
+unsigned desc;
+
+desc = vsz - 2;
+desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
+t_desc = tcg_const_i32(desc);
+
+gen_helper_sve_last_active_element(ret, t_p, t_desc);
+
+tcg_temp_free_i32(t_desc);
+tcg_temp_free_ptr(t_p);
+}
+
+/* Increment LAST to the offset of the next element in the vector,
+   wrapping around to 0.  */
+static void incr_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+unsigned vsz = vec_full_reg_size(s);
+
+tcg_gen_addi_i32(last, last, 1 << esz);
+if (is_power_of_2(vsz)) {
+tcg_gen_andi_i32(last, last, vsz - 1);
+} else {
+TCGv_i32 max = tcg_const_i32(vsz);
+TCGv_i32 zero = tcg_const_i32(0);
+tcg_gen_movcond_i32(TCG_COND_GEU, last, last, max, zero, last);
+tcg_temp_free_i32(max);
+tcg_temp_free_i32(zero);
+}
+}
+
+/* If LAST < 0, set LAST to the offset of the last element in the vector.  */
+static void wrap_last_active(DisasContext *s, TCGv_i32 last, int esz)
+{
+unsigned vsz = vec_full_reg_size(s);
+
+if (is_power_of_2(vsz)) {
+tcg_gen_andi_i32(last, last, vsz - 1);
+} else {
+TCGv_i32 max = tcg_const_i32(vsz - (1 << esz));
+TCGv_i32 zero = tcg_const_i32(0);
+tcg_gen_movcond_i32(TCG_COND_LT, last, last, zero, max, last);
+tcg_temp_free_i32(max);
+tcg_temp_free_i32(zero);
+}
+}
+
+/* Load an unsigned element of ESZ from BASE+OFS.  */
+static TCGv_i64 load_esz(TCGv_ptr base, int ofs, int esz)
+{
+TCGv_i64 r = tcg_temp_new_i64();
+
+switch (esz) {
+case 0:
+tcg_gen_ld8u_i64(r, base, ofs);
+break;
+case 1:
+tcg_gen_ld16u_i64(r, base, ofs);
+break;
+case 2:
+tcg_gen_ld32u_i64(r, base, ofs);
+break;
+case 3:
+tcg_gen_ld_i64(r, base, ofs);
+break;
+default:
+g_assert_not_reached();
+}
+return r;
+}
+
+/* Load an unsigned element of ESZ from RM[LAST].  */
+static TCGv_i64 load_last_active(DisasContext *s, TCGv_i32 last,
+ int rm, int esz)
+{
+TCGv_ptr p = tcg_temp_new_ptr();
+TCGv_i64 r;
+
+/* Convert offset into vector into offset into ENV.
+   The final adjustment for the vector register base
+   is added via 
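The "not found" convention is worth spelling out: the return value is a
byte offset into the vector, and a fully-false predicate yields minus the
element size.  A toy model (illustration only, assuming a well-formed
predicate where only properly aligned bits are set):

#include <stdint.h>
#include <stdio.h>

static int last_active_element(const uint64_t *pg, int words, int esz)
{
    for (int w = words - 1; w >= 0; w--) {
        for (int b = 63; b >= 0; b--) {
            if (pg[w] & (1ull << b)) {
                return w * 64 + b;    /* predicate bit b governs byte b */
            }
        }
    }
    return -(1 << esz);               /* not found, e.g. -8 for esz=3 */
}

int main(void)
{
    uint64_t pg[1] = {0x0000000000000101ull};       /* bytes 0 and 8 active */
    printf("%d\n", last_active_element(pg, 1, 3));  /* 8  */
    pg[0] = 0;
    printf("%d\n", last_active_element(pg, 1, 3));  /* -8 */
    return 0;
}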

[Qemu-devel] [PATCH v2 52/67] target/arm: Implement SVE store vector/predicate register

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 101 +
 target/arm/sve.decode  |   6 +++
 2 files changed, 107 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index b000a2482e..9c724980a0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3501,6 +3501,95 @@ static void do_ldr(DisasContext *s, uint32_t vofs, 
uint32_t len,
 tcg_temp_free_i64(t0);
 }
 
+/* Similarly for stores.  */
+static void do_str(DisasContext *s, uint32_t vofs, uint32_t len,
+   int rn, int imm)
+{
+uint32_t len_align = QEMU_ALIGN_DOWN(len, 8);
+uint32_t len_remain = len % 8;
+uint32_t nparts = len / 8 + ctpop8(len_remain);
+int midx = get_mem_index(s);
+TCGv_i64 addr, t0;
+
+addr = tcg_temp_new_i64();
+t0 = tcg_temp_new_i64();
+
+/* Note that unpredicated load/store of vector/predicate registers
+ * are defined as a stream of bytes, which equates to little-endian
+ * operations on larger quantities.  There is no nice way to force
+ * a little-endian load for aarch64_be-linux-user out of line.
+ *
+ * Attempt to keep code expansion to a minimum by limiting the
+ * amount of unrolling done.
+ */
+if (nparts <= 4) {
+int i;
+
+for (i = 0; i < len_align; i += 8) {
+tcg_gen_ld_i64(t0, cpu_env, vofs + i);
+tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + i);
+tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEQ);
+}
+} else {
+TCGLabel *loop = gen_new_label();
+TCGv_ptr i = TCGV_NAT_TO_PTR(glue(tcg_const_local_, ptr)(0));
+TCGv_ptr src;
+
+gen_set_label(loop);
+
+src = tcg_temp_new_ptr();
+tcg_gen_add_ptr(src, cpu_env, i);
+tcg_gen_ld_i64(t0, src, vofs);
+
+/* Minimize the number of local temps that must be re-read from
+ * the stack each iteration.  Instead, re-compute values other
+ * than the loop counter.
+ */
+tcg_gen_addi_ptr(src, i, imm);
+#if UINTPTR_MAX == UINT32_MAX
+tcg_gen_extu_i32_i64(addr, TCGV_PTR_TO_NAT(src));
+tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, rn));
+#else
+tcg_gen_add_i64(addr, TCGV_PTR_TO_NAT(src), cpu_reg_sp(s, rn));
+#endif
+tcg_temp_free_ptr(src);
+
+tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEQ);
+
+tcg_gen_addi_ptr(i, i, 8);
+
+glue(tcg_gen_brcondi_, ptr)(TCG_COND_LTU, TCGV_PTR_TO_NAT(i),
+   len_align, loop);
+tcg_temp_free_ptr(i);
+}
+
+/* Predicate register stores can be any multiple of 2.  */
+if (len_remain) {
+tcg_gen_ld_i64(t0, cpu_env, vofs + len_align);
+tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + len_align);
+
+switch (len_remain) {
+case 2:
+case 4:
+case 8:
+tcg_gen_qemu_st_i64(t0, addr, midx, MO_LE | ctz32(len_remain));
+break;
+
+case 6:
+tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEUL);
+tcg_gen_addi_i64(addr, addr, 4);
+tcg_gen_shri_i64(t0, t0, 32);
+tcg_gen_qemu_st_i64(t0, addr, midx, MO_LEUW);
+break;
+
+default:
+g_assert_not_reached();
+}
+}
+tcg_temp_free_i64(addr);
+tcg_temp_free_i64(t0);
+}
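
Aside, not part of the patch: for the len_remain == 6 case above, the remaining data is written as a 4-byte store followed by a 2-byte store of the same value shifted right by 32. A host-only sketch of that split (illustrative only, explicit little-endian stores, no QEMU API):

#include <stdint.h>
#include <stdio.h>

static void store_le(uint8_t *dst, uint64_t val, int nbytes)
{
    for (int i = 0; i < nbytes; i++) {
        dst[i] = val >> (8 * i);    /* little-endian byte stream */
    }
}

static void store_remain6(uint8_t *dst, uint64_t t0)
{
    store_le(dst, t0, 4);           /* MO_LEUL part */
    store_le(dst + 4, t0 >> 32, 2); /* advance 4, shift the data, MO_LEUW part */
}

int main(void)
{
    uint8_t buf[6];
    store_remain6(buf, 0x0000bbaa44332211ull);
    for (int i = 0; i < 6; i++) {
        printf("%02x ", buf[i]);    /* 11 22 33 44 aa bb */
    }
    printf("\n");
    return 0;
}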
+
 #undef ptr
 
 static void trans_LDR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
@@ -3515,6 +3604,18 @@ static void trans_LDR_pri(DisasContext *s, arg_rri *a, 
uint32_t insn)
 do_ldr(s, pred_full_reg_offset(s, a->rd), size, a->rn, a->imm * size);
 }
 
+static void trans_STR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
+{
+int size = vec_full_reg_size(s);
+do_str(s, vec_full_reg_offset(s, a->rd), size, a->rn, a->imm * size);
+}
+
+static void trans_STR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
+{
+int size = pred_full_reg_size(s);
+do_str(s, pred_full_reg_offset(s, a->rd), size, a->rn, a->imm * size);
+}
+
 /*
  *** SVE Memory - Contiguous Load Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 3e30985a09..5d8e1481d7 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -800,6 +800,12 @@ LD1RQ_zpri 1010010 .. 00 0 001 ... . . \
 
 ### SVE Memory Store Group
 
+# SVE store predicate register
+STR_pri1110010 11 0. . 000 ... . 0 
@pd_rn_i9
+
+# SVE store vector register
+STR_zri1110010 11 0. . 010 ... . . 
@rd_rn_i9
+
 # SVE contiguous store (scalar plus immediate)
 # ST1B, ST1H, ST1W, ST1D; require msz <= esz
 ST_zpri1110010 .. esz:2  0 111 ... . . \
-- 
2.14.3




[Qemu-devel] [PATCH v2 30/67] target/arm: Implement SVE compress active elements

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  3 +++
 target/arm/sve_helper.c| 34 ++
 target/arm/translate-sve.c | 12 
 target/arm/sve.decode  |  6 ++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index bab20345c6..d977aea00d 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -460,6 +460,9 @@ DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 62982bd099..87a1a32232 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2016,3 +2016,37 @@ DO_TRN(sve_trn_d, uint64_t, )
 #undef DO_ZIP
 #undef DO_UZP
 #undef DO_TRN
+
+void HELPER(sve_compact_s)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+intptr_t i, j, opr_sz = simd_oprsz(desc) / 4;
+uint32_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+for (i = j = 0; i < opr_sz; i++) {
+if (pg[H1(i / 2)] & (i & 1 ? 0x10 : 0x01)) {
+d[H4(j)] = n[H4(i)];
+j++;
+}
+}
+for (; j < opr_sz; j++) {
+d[H4(j)] = 0;
+}
+}
+
+void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+intptr_t i, j, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+for (i = j = 0; i < opr_sz; i++) {
+if (pg[H1(i)] & 1) {
+d[j] = n[i];
+j++;
+}
+}
+for (; j < opr_sz; j++) {
+d[j] = 0;
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 09ac955a36..21531b259c 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2111,6 +2111,18 @@ static void trans_TRN2_z(DisasContext *s, arg_rrr_esz 
*a, uint32_t insn)
 do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
 }
 
+/*
+ *** SVE Permute Vector - Predicated Group
+ */
+
+static void trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
+};
+do_zpz_ool(s, a, fns[a->esz]);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 2efa3773fc..a89bd37eeb 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -425,6 +425,12 @@ UZP2_z 0101 .. 1 . 011 011 . . 
@rd_rn_rm
 TRN1_z 0101 .. 1 . 011 100 . . @rd_rn_rm
 TRN2_z 0101 .. 1 . 011 101 . . @rd_rn_rm
 
+### SVE Permute - Predicated Group
+
+# SVE compress active elements
+# Note esz >= 2
+COMPACT0101 .. 11 100 ... . .  
@rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.14.3




[Qemu-devel] [PATCH v2 58/67] target/arm: Implement SVE floating-point arithmetic with immediate

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 56 +++
 target/arm/sve_helper.c| 68 ++
 target/arm/translate-sve.c | 73 ++
 target/arm/sve.decode  | 14 +
 4 files changed, 211 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 30373e3fc7..7ada12687b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -809,6 +809,62 @@ DEF_HELPER_FLAGS_6(sve_fmulx_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(sve_fmulx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_6(sve_fadds_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadds_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadds_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fsubs_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubs_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubs_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmuls_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmuls_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmuls_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fsubrs_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubrs_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsubrs_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmaxnms_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnms_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnms_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fminnms_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnms_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnms_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmaxs_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxs_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxs_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmins_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmins_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmins_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i64, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index ace613684d..9378c8f0b2 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2995,6 +2995,74 @@ DO_ZPZZ_FP_D(sve_fmulx_d, uint64_t, helper_vfp_mulxd)
 #undef DO_ZPZZ_FP
 #undef DO_ZPZZ_FP_D
 
+/* Three-operand expander, with one scalar operand, controlled by
+ * a predicate, with the extra float_status parameter.
+ */
+#define DO_ZPZS_FP(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint64_t scalar,  \
+  void *status, uint32_t desc)\
+{ \
+intptr_t i, opr_sz = simd_oprsz(desc);\
+TYPE mm = scalar; \
+for (i = 0; i < opr_sz; ) {   \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));   \
+do {  \
+if (pg & 1) { \
+TYPE nn = *(TYPE *)(vn + H(i));   \
+*(TYPE *)(vd + H(i)) = OP(nn, mm, status);\
+} \
+i += sizeof(TYPE), pg >>= sizeof(TYPE);   \
+} while (i & 15); \
+} \
+}
+
+DO_ZPZS_FP(sve_fadds_h, float16, H1_2, float16_add)

[Qemu-devel] [PATCH v2 35/67] target/arm: Implement SVE Select Vectors Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  9 
 target/arm/sve_helper.c| 55 ++
 target/arm/translate-sve.c |  2 ++
 target/arm/sve.decode  |  6 +
 4 files changed, 72 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index c3f8a2b502..0f57f64895 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -195,6 +195,15 @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index f524a1ddce..86cd792cdf 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2125,3 +2125,58 @@ void HELPER(sve_splice)(void *vd, void *vn, void *vm, 
void *vg, uint32_t desc)
 }
 swap_memmove(vd + len, vm, opr_sz * 8 - len);
 }
+
+void HELPER(sve_sel_zpzz_b)(void *vd, void *vn, void *vm,
+void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i], mm = m[i];
+uint64_t pp = expand_pred_b(pg[H1(i)]);
+d[i] = (nn & pp) | (mm & ~pp);
+}
+}
+
+void HELPER(sve_sel_zpzz_h)(void *vd, void *vn, void *vm,
+void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i], mm = m[i];
+uint64_t pp = expand_pred_h(pg[H1(i)]);
+d[i] = (nn & pp) | (mm & ~pp);
+}
+}
+
+void HELPER(sve_sel_zpzz_s)(void *vd, void *vn, void *vm,
+void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i], mm = m[i];
+uint64_t pp = expand_pred_s(pg[H1(i)]);
+d[i] = (nn & pp) | (mm & ~pp);
+}
+}
+
+void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
+void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i], mm = m[i];
+d[i] = (pg[H1(i)] & 1 ? nn : mm);
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 559fb41fd6..021b33ced9 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -361,6 +361,8 @@ static void trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz 
*a, uint32_t insn)
 do_zpzz_ool(s, a, fns[a->esz]);
 }
 
+DO_ZPZZ(SEL, sel)
+
 #undef DO_ZPZZ
 
 /*
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 70feb448e6..7ec84fdd80 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -99,6 +99,7 @@
_esz rn=%reg_movprfx
 @rdm_pg_rn  esz:2 ... ... ... pg:3 rn:5 rd:5 \
_esz rm=%reg_movprfx
+@rd_pg4_rn_rm   esz:2 . rm:5  .. pg:4  rn:5 rd:5   &rprr_esz
 
 # Three register operand, with governing predicate, vector element size
 @rda_pg_rn_rm   esz:2 . rm:5  ... pg:3 rn:5 rd:5 \
@@ -467,6 +468,11 @@ RBIT   0101 .. 1001 11 100 ... . . 
@rd_pg_rn
 # SVE vector splice (predicated)
 SPLICE 0101 .. 101 100 100 ... . . @rdn_pg_rm
 
+### SVE Select Vectors Group
+
+# SVE select vector elements (predicated)
+SEL_zpzz   0101 .. 1 . 11  . . @rd_pg4_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.14.3




[Qemu-devel] [PATCH v2 29/67] target/arm: Implement SVE Permute - Interleaving Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 15 ++
 target/arm/sve_helper.c| 72 ++
 target/arm/translate-sve.c | 69 
 target/arm/sve.decode  | 10 +++
 4 files changed, 166 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ff958fcebd..bab20345c6 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -445,6 +445,21 @@ DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index c3a2706a16..62982bd099 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1944,3 +1944,75 @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t 
pred_desc)
 }
 }
 }
+
+#define DO_ZIP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)   \
+{\
+intptr_t oprsz = simd_oprsz(desc);   \
+intptr_t i, oprsz_2 = oprsz / 2; \
+ARMVectorReg tmp_n, tmp_m;   \
+/* We produce output faster than we consume input.   \
+   Therefore we must be mindful of possible overlap.  */ \
+if (unlikely((vn - vd) < (uintptr_t)oprsz)) {\
+vn = memcpy(&tmp_n, vn, oprsz_2);\
+}\
+if (unlikely((vm - vd) < (uintptr_t)oprsz)) {\
+vm = memcpy(&tmp_m, vm, oprsz_2);\
+}\
+for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {\
+*(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i)); \
+*(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
+}\
+}
+
+DO_ZIP(sve_zip_b, uint8_t, H1)
+DO_ZIP(sve_zip_h, uint16_t, H1_2)
+DO_ZIP(sve_zip_s, uint32_t, H1_4)
+DO_ZIP(sve_zip_d, uint64_t, )
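
Aside, not part of the patch: a host-only sketch of the byte-sized ZIP and of why the temporaries make an in-place destination safe (illustrative names, plain C, no QEMU types):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

static void zip_bytes(uint8_t *d, const uint8_t *n, const uint8_t *m, int oprsz)
{
    uint8_t tn[8], tm[8];
    memcpy(tn, n, oprsz / 2);       /* copy first: d may alias n or m */
    memcpy(tm, m, oprsz / 2);
    for (int i = 0; i < oprsz / 2; i++) {
        d[2 * i + 0] = tn[i];       /* output advances twice as fast as input */
        d[2 * i + 1] = tm[i];
    }
}

int main(void)
{
    uint8_t n[8] = { 1, 2, 3, 4 }, m[8] = { 5, 6, 7, 8 };
    zip_bytes(n, n, m, 8);          /* in place: destination aliases n */
    for (int i = 0; i < 8; i++) {
        printf("%d ", n[i]);        /* 1 5 2 6 3 7 4 8 */
    }
    printf("\n");
    return 0;
}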
+
+#define DO_UZP(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{  \
+intptr_t oprsz = simd_oprsz(desc); \
+intptr_t oprsz_2 = oprsz / 2;  \
+intptr_t odd_ofs = simd_data(desc);\
+intptr_t i;\
+ARMVectorReg tmp_m;\
+if (unlikely((vm - vd) < (uintptr_t)oprsz)) {  \
+vm = memcpy(&tmp_m, vm, oprsz);\
+}  \
+for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {  \
+*(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs)); \
+}  \
+for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {  \
+*(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
+}  \
+}
+
+DO_UZP(sve_uzp_b, uint8_t, H1)
+DO_UZP(sve_uzp_h, uint16_t, H1_2)
+DO_UZP(sve_uzp_s, uint32_t, H1_4)
+DO_UZP(sve_uzp_d, uint64_t, )
+
+#define DO_TRN(NAME, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, 

[Qemu-devel] [PATCH v2 34/67] target/arm: Implement SVE vector splice (predicated)

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  2 ++
 target/arm/sve_helper.c| 37 +
 target/arm/translate-sve.c | 10 ++
 target/arm/sve.decode  |  3 +++
 4 files changed, 52 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 3b7c54905d..c3f8a2b502 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -479,6 +479,8 @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index a67bb579b8..f524a1ddce 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2088,3 +2088,40 @@ int32_t HELPER(sve_last_active_element)(void *vg, 
uint32_t pred_desc)
 
 return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
 }
+
+void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
+{
+intptr_t opr_sz = simd_oprsz(desc) / 8;
+int esz = simd_data(desc);
+uint64_t pg, first_g, last_g, len, mask = pred_esz_masks[esz];
+intptr_t i, first_i, last_i;
+ARMVectorReg tmp;
+
+first_i = last_i = 0;
+first_g = last_g = 0;
+
+/* Find the extent of the active elements within VG.  */
+for (i = QEMU_ALIGN_UP(opr_sz, 8) - 8; i >= 0; i -= 8) {
+pg = *(uint64_t *)(vg + i) & mask;
+if (pg) {
+if (last_g == 0) {
+last_g = pg;
+last_i = i;
+}
+first_g = pg;
+first_i = i;
+}
+}
+
+len = 0;
+if (first_g != 0) {
+first_i = first_i * 8 + ctz64(first_g);
+last_i = last_i * 8 + 63 - clz64(last_g);
+len = last_i - first_i + (1 << esz);
+if (vd == vm) {
+vm = memcpy(&tmp, vm, opr_sz * 8);
+}
+swap_memmove(vd, vn + first_i, len);
+}
+swap_memmove(vd + len, vm, opr_sz * 8 - len);
+}
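
Aside, not part of the patch: a simplified byte-element sketch of the splice semantics implemented above, i.e. copy the segment of N spanning the first through last active elements, then fill the rest of D from the start of M (illustrative only; the real helper works on 64-bit predicate chunks and handles overlap via swap_memmove):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

static void splice_bytes(uint8_t *d, const uint8_t *n, const uint8_t *m,
                         const uint8_t *active, int oprsz)
{
    int first = -1, last = -1;
    for (int i = 0; i < oprsz; i++) {
        if (active[i]) {
            if (first < 0) {
                first = i;
            }
            last = i;
        }
    }
    int len = (first < 0 ? 0 : last - first + 1);
    memmove(d, n + (first < 0 ? 0 : first), len);
    memcpy(d + len, m, oprsz - len);
}

int main(void)
{
    uint8_t n[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
    uint8_t m[8] = { 20, 21, 22, 23, 24, 25, 26, 27 };
    uint8_t pg[8] = { 0, 0, 1, 1, 1, 0, 0, 0 };   /* elements 2..4 active */
    uint8_t d[8];
    splice_bytes(d, n, m, pg, 8);
    for (int i = 0; i < 8; i++) {
        printf("%d ", d[i]);        /* 12 13 14 20 21 22 23 24 */
    }
    printf("\n");
    return 0;
}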
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 5a1ed379ad..559fb41fd6 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2473,6 +2473,16 @@ static void trans_RBIT(DisasContext *s, arg_rpr_esz *a, 
uint32_t insn)
 do_zpz_ool(s, a, fns[a->esz]);
 }
 
+static void trans_SPLICE(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   pred_full_reg_offset(s, a->pg),
+   vsz, vsz, a->esz, gen_helper_sve_splice);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 8903fb6592..70feb448e6 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -464,6 +464,9 @@ REVH0101 .. 1001 01 100 ... . . 
@rd_pg_rn
 REVW   0101 .. 1001 10 100 ... . . @rd_pg_rn
 RBIT   0101 .. 1001 11 100 ... . . @rd_pg_rn
 
+# SVE vector splice (predicated)
+SPLICE 0101 .. 101 100 100 ... . . @rdn_pg_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.14.3




[Qemu-devel] [PATCH v2 53/67] target/arm: Implement SVE scatter stores

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 41 ++
 target/arm/sve_helper.c| 62 
 target/arm/translate-sve.c | 71 ++
 target/arm/sve.decode  | 39 +
 4 files changed, 213 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 6c640a92ff..b5c093f2fd 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -918,3 +918,44 @@ DEF_HELPER_FLAGS_4(sve_st1hs_r, TCG_CALL_NO_WG, void, env, 
ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_st1hd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 
 DEF_HELPER_FLAGS_4(sve_st1sd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbs_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sths_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stss_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbs_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sths_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stss_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zsu, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zss, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_6(sve_stbd_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_sthd_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stsd_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_6(sve_stdd_zd, TCG_CALL_NO_WG,
+   void, env, ptr, ptr, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index a7dc6f6164..07b3d285f2 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3545,3 +3545,65 @@ void HELPER(sve_st4dd_r)(CPUARMState *env, void *vg,
 addr += 4 * 8;
 }
 }
+
+/* Stores with a vector index.  */
+
+#define DO_ST1_ZPZ_S(NAME, TYPEI, FN)   \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm,   \
+  target_ulong base, uint32_t desc) \
+{   \
+intptr_t i, oprsz = simd_oprsz(desc) / 8;   \
+unsigned scale = simd_data(desc);   \
+uintptr_t ra = GETPC(); \
+uint32_t *d = vd; TYPEI *m = vm; uint8_t *pg = vg;  \
+for (i = 0; i < oprsz; i++) {   \
+uint8_t pp = pg[H1(i)]; \
+if (pp & 0x01) {\
+target_ulong off = (target_ulong)m[H4(i * 2)] << scale; \
+FN(env, base + off, d[H4(i * 2)], ra);  \
+}   \
+if (pp & 0x10) {\
+target_ulong off = (target_ulong)m[H4(i * 2 + 1)] << scale; \
+FN(env, base + off, d[H4(i * 2 + 1)], ra);  \
+}   \
+}   \
+}
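
Aside, not part of the patch: the 0x01/0x10 tests follow from the SVE predicate layout, where each vector byte has one flag bit and an element is governed by its lowest flag bit, so a single predicate byte covers two 4-byte elements at bits 0 and 4. A tiny stand-alone illustration (no QEMU API):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t pg[2] = { 0x10, 0x01 };         /* elements 1 and 2 active */
    for (int elt = 0; elt < 4; elt++) {
        int bit = (elt & 1) ? 0x10 : 0x01;  /* flag bit within the pred byte */
        int active = (pg[elt / 2] & bit) != 0;
        printf("element %d: %s\n", elt, active ? "stored" : "skipped");
    }
    return 0;
}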
+
+#define DO_ST1_ZPZ_D(NAME, TYPEI, FN)   \
+void HELPER(NAME)(CPUARMState *env, void *vd, void *vg, void *vm,   \
+  target_ulong base, uint32_t desc) \
+{   \
+intptr_t i, oprsz = simd_oprsz(desc) / 8;   \
+unsigned scale = simd_data(desc);   \
+uintptr_t ra = GETPC(); \
+uint64_t *d = vd, *m = vm; uint8_t *pg = vg;

[Qemu-devel] [PATCH v2 44/67] target/arm: Implement SVE Memory Contiguous Load Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  35 +++
 target/arm/sve_helper.c| 235 +
 target/arm/translate-sve.c | 130 +
 target/arm/sve.decode  |  44 -
 4 files changed, 442 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 2e76084992..fcc9ba5f50 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -719,3 +719,38 @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld4bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld2hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld3hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld4hh_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld2ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld3ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld4ss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld2dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld3dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld4dd_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1bhu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1bsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1bdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1bhs_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1bss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1bds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1hsu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1hdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1hss_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1hds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve_ld1sdu_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
+DEF_HELPER_FLAGS_4(sve_ld1sds_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4f45f11bff..e542725113 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2788,3 +2788,238 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, 
uint32_t pred_desc)
 
 return predtest_ones(d, oprsz, esz_mask);
 }
+
+/*
+ * Load contiguous data, protected by a governing predicate.
+ */
+#define DO_LD1(NAME, FN, TYPEE, TYPEM, H)  \
+void HELPER(NAME)(CPUARMState *env, void *vg,  \
+  target_ulong addr, uint32_t desc)\
+{  \
+intptr_t i, oprsz = simd_oprsz(desc);  \
+intptr_t ra = GETPC(); \
+unsigned rd = simd_data(desc); \
+void *vd = &env->vfp.zregs[rd];\
+for (i = 0; i < oprsz; ) { \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));\
+do {   \
+TYPEM m = 0;   \
+if (pg & 1) {  \
+m = FN(env, addr, ra); \
+}  \
+*(TYPEE *)(vd + H(i)) = m; \
+i += sizeof(TYPEE), pg >>= sizeof(TYPEE);  \
+addr += sizeof(TYPEM); \
+} while (i & 15);  \
+}  \
+}
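
Aside, not part of the patch: the predicate-stepping idiom used above (read 16 flag bits covering 16 vector bytes, then consume sizeof(element) bits per element, looking only at bit 0 of each group) in isolation, as a stand-alone illustration with 4-byte elements:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Flags for 16 vector bytes; with 4-byte elements, bits 0, 4, 8 and 12
     * govern elements 0..3.  Here elements 0 and 2 are active.
     */
    uint16_t pg = 0x0101;
    int esz_bytes = 4;
    for (int i = 0, e = 0; i < 16; i += esz_bytes, e++) {
        if (pg & 1) {
            printf("element %d loaded\n", e);
        } else {
            printf("element %d left zero\n", e);
        }
        pg >>= esz_bytes;           /* advance to the next element's flag */
    }
    return 0;
}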
+
+#define DO_LD1_D(NAME, FN, TYPEM)  \
+void HELPER(NAME)(CPUARMState *env, void *vg,  \
+  target_ulong addr, uint32_t desc)\
+{  \
+intptr_t i, oprsz = simd_oprsz(desc) / 8;  \
+intptr_t ra = GETPC(); \
+unsigned rd = simd_data(desc); \
+uint64_t *d = &env->vfp.zregs[rd].d[0];\
+uint8_t *pg = vg;  

[Qemu-devel] [PATCH v2 32/67] target/arm: Implement SVE copy to vector (predicated)

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 13 +
 target/arm/sve.decode  |  6 ++
 2 files changed, 19 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 207a22a0bc..fc2a295ab7 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2422,6 +2422,19 @@ static void trans_LASTB_r(DisasContext *s, arg_rpr_esz 
*a, uint32_t insn)
 do_last_general(s, a, true);
 }
 
+static void trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
+}
+
+static void trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
+{
+int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
+TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
+do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
+tcg_temp_free_i64(t);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 1370802c12..5e127de88c 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -451,6 +451,12 @@ LASTB_v0101 .. 10001 1 100 ... . . 
@rd_pg_rn
 LASTA_r0101 .. 1 0 101 ... . . 
@rd_pg_rn
 LASTB_r0101 .. 1 1 101 ... . . 
@rd_pg_rn
 
+# SVE copy element from SIMD scalar register
+CPY_m_v0101 .. 10 100 ... . .  
@rd_pg_rn
+
+# SVE copy element from general register to vector (predicated)
+CPY_m_r0101 .. 101000 101 ... . .  
@rd_pg_rn
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.14.3




[Qemu-devel] [PATCH v2 24/67] target/arm: Implement SVE Bitwise Immediate Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 50 ++
 target/arm/sve.decode  | 17 
 2 files changed, 67 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 702f20e97b..21b1e4df85 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -34,6 +34,8 @@
 #include "translate-a64.h"
 
 typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
+typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t,
+ int64_t, uint32_t, uint32_t);
 typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
 uint32_t, uint32_t, uint32_t);
 
@@ -1648,6 +1650,54 @@ static void trans_SINCDEC_v(DisasContext *s, 
arg_incdec2_cnt *a,
 }
 }
 
+/*
+ *** SVE Bitwise Immediate Group
+ */
+
+static void do_zz_dbm(DisasContext *s, arg_rr_dbm *a, GVecGen2iFn *gvec_fn)
+{
+unsigned vsz;
+uint64_t imm;
+
+if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
+extract32(a->dbm, 0, 6),
+extract32(a->dbm, 6, 6))) {
+unallocated_encoding(s);
+return;
+}
+
+vsz = vec_full_reg_size(s);
+gvec_fn(MO_64, vec_full_reg_offset(s, a->rd),
+vec_full_reg_offset(s, a->rn), imm, vsz, vsz);
+}
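
Aside, not part of the patch: logic_imm_decode_wmask() is the existing A64 logical-immediate decoder reused here. For readers unfamiliar with the N:immr:imms ("dbm") format, the following is a stand-alone sketch of the same decoding scheme, written from the ARM ARM pseudocode; it is illustrative only and is not the QEMU implementation (the clz builtin assumes GCC/Clang):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static bool decode_bit_masks(uint64_t *wmask, int n, int imms, int immr)
{
    int val = (n << 6) | (~imms & 0x3f);
    if (val == 0) {
        return false;
    }
    int len = 31 - __builtin_clz(val);      /* highest set bit of N:NOT(imms) */
    if (len < 1) {
        return false;                       /* reserved encoding */
    }
    int esize = 1 << len;                   /* element size: 2, 4, ..., 64 */
    int s = imms & (esize - 1);
    int r = immr & (esize - 1);
    if (s == esize - 1) {
        return false;                       /* all-ones element is reserved */
    }
    uint64_t emask = (esize == 64) ? ~0ull : (1ull << esize) - 1;
    uint64_t welem = (1ull << (s + 1)) - 1; /* s+1 consecutive ones */
    if (r) {                                /* rotate right within the element */
        welem = ((welem >> r) | (welem << (esize - r))) & emask;
    }
    uint64_t mask = 0;                      /* replicate to 64 bits */
    for (int i = 0; i < 64; i += esize) {
        mask |= welem << i;
    }
    *wmask = mask;
    return true;
}

int main(void)
{
    uint64_t imm;
    if (decode_bit_masks(&imm, 0, 7, 0)) {
        printf("0x%016llx\n", (unsigned long long)imm);  /* 0x000000ff000000ff */
    }
    if (decode_bit_masks(&imm, 0, 7, 4)) {
        printf("0x%016llx\n", (unsigned long long)imm);  /* 0xf000000ff000000f */
    }
    return 0;
}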
+
+static void trans_AND_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
+{
+do_zz_dbm(s, a, tcg_gen_gvec_andi);
+}
+
+static void trans_ORR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
+{
+do_zz_dbm(s, a, tcg_gen_gvec_ori);
+}
+
+static void trans_EOR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
+{
+do_zz_dbm(s, a, tcg_gen_gvec_xori);
+}
+
+static void trans_DUPM(DisasContext *s, arg_DUPM *a, uint32_t insn)
+{
+uint64_t imm;
+if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
+extract32(a->dbm, 0, 6),
+extract32(a->dbm, 6, 6))) {
+unallocated_encoding(s);
+return;
+}
+do_dupi_z(s, a->rd, imm);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 5690b5fcb9..0990d135f4 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -50,6 +50,7 @@
 
 _eszrd rn esz
rd rn imm
+&rr_dbm rd rn dbm
   rd rn rm imm
 _esz   rd rn imm esz
 _esz   rd rn rm esz
@@ -112,6 +113,10 @@
 @rd_rn_tszimm   .. ... ... .. rn:5 rd:5 \
_esz esz=%tszimm16_esz
 
+# Two register operand, one encoded bitmask.
+@rdn_dbm    ..  dbm:13 rd:5 \
+   &rr_dbm rn=%reg_movprfx
+
 # Basic Load/Store with 9-bit immediate offset
 @pd_rn_i9    .. rn:5 . rd:4\
 imm=%imm9_16_10
@@ -331,6 +336,18 @@ INCDEC_v   0100 .. 1 1  1100 0 d:1 . .
@incdec2_cnt u=1
 # Note these require esz != 0.
 SINCDEC_v  0100 .. 1 0  1100 d:1 u:1 . .   @incdec2_cnt
 
+### SVE Bitwise Immediate Group
+
+# SVE bitwise logical with immediate (unpredicated)
+ORR_zzi0101 00  . .@rdn_dbm
+EOR_zzi0101 01  . .@rdn_dbm
+AND_zzi0101 10  . .@rdn_dbm
+
+# SVE broadcast bitmask immediate
+DUPM   0101 11  dbm:13 rd:5
+
+### SVE Predicate Logical Operations Group
+
 # SVE predicate logical operations
 AND_   00100101 0. 00  01  0  0    @pd_pg_pn_pm_s
 BIC_   00100101 0. 00  01  0  1    @pd_pg_pn_pm_s
-- 
2.14.3




[Qemu-devel] [PATCH v2 48/67] target/arm: Implement SVE floating-point arithmetic (predicated)

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  77 
 target/arm/sve_helper.c| 107 +
 target/arm/translate-sve.c |  47 
 target/arm/sve.decode  |  17 +++
 4 files changed, 248 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index fb7609f9ef..84d0a8978c 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -720,6 +720,83 @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_6(sve_fadd_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadd_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fadd_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fsub_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsub_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fsub_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmul_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmul_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmul_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fdiv_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fdiv_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fdiv_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmin_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmin_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmin_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmax_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmax_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmax_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fminnum_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnum_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fminnum_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmaxnum_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnum_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmaxnum_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fabd_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fabd_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fabd_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fscalbn_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fscalbn_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fscalbn_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve_fmulx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmulx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve_fmulx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index a1e0ceb5fb..d80babfae7 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2789,6 +2789,113 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, 
uint32_t pred_desc)
 return predtest_ones(d, oprsz, esz_mask);
 }
 
+/* Fully general three-operand expander, controlled by a predicate,
+ * With the extra float_status parameter.
+ */
+#define DO_ZPZZ_FP(NAME, TYPE, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg,   \
+  void *status, uint32_t desc)  \
+{ 

[Qemu-devel] [PATCH v2 41/67] target/arm: Implement FDUP/DUP

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 35 +++
 target/arm/sve.decode  |  8 
 2 files changed, 43 insertions(+)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 4b92a55c21..7571d02237 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2939,6 +2939,41 @@ static void trans_WHILE(DisasContext *s, arg_WHILE *a, 
uint32_t insn)
 tcg_temp_free_i32(t3);
 }
 
+/*
+ *** SVE Integer Wide Immediate - Unpredicated Group
+ */
+
+static void trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+int dofs = vec_full_reg_offset(s, a->rd);
+uint64_t imm;
+
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+
+/* Decode the VFP immediate.  */
+imm = vfp_expand_imm(a->esz, a->imm);
+imm = dup_const(a->esz, imm);
+
+tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
+}
+
+static void trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+int dofs = vec_full_reg_offset(s, a->rd);
+
+if (a->esz == 0 && extract32(insn, 13, 1)) {
+unallocated_encoding(s);
+return;
+}
+
+tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index b5bc7e9546..ea1bfe7579 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -622,6 +622,14 @@ CTERM  00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 

 # SVE integer compare scalar count and limit
 WHILE  00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
 
+### SVE Integer Wide Immediate - Unpredicated Group
+
+# SVE broadcast floating-point immediate (unpredicated)
+FDUP   00100101 esz:2 111 00 1110 imm:8 rd:5
+
+# SVE broadcast integer immediate (unpredicated)
+DUP_i  00100101 esz:2 111 00 011 .  rd:5   imm=%sh8_i8s
+
 ### SVE Memory - 32-bit Gather and Unsized Contiguous Group
 
 # SVE load predicate register
-- 
2.14.3




[Qemu-devel] [PATCH v2 22/67] target/arm: Implement SVE floating-point trig select coefficient

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  4 
 target/arm/sve_helper.c| 43 +++
 target/arm/translate-sve.c | 19 +++
 target/arm/sve.decode  |  4 
 4 files changed, 70 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index e2925ff8ec..4f1bd5a62f 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -389,6 +389,10 @@ DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_ftssel_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_ftssel_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_ftssel_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4d42653eef..b4f70af23f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -23,6 +23,7 @@
 #include "exec/cpu_ldst.h"
 #include "exec/helper-proto.h"
 #include "tcg/tcg-gvec-desc.h"
+#include "fpu/softfloat.h"
 
 
 /* Note that vector data is stored in host-endian 64-bit chunks,
@@ -1182,3 +1183,45 @@ void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t 
desc)
 d[i] = coeff[idx] | (exp << 52);
 }
 }
+
+void HELPER(sve_ftssel_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 2;
+uint16_t *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i += 1) {
+uint16_t nn = n[i];
+uint16_t mm = m[i];
+if (mm & 1) {
+nn = float16_one;
+}
+d[i] = nn ^ (mm & 2) << 14;
+}
+}
+
+void HELPER(sve_ftssel_s)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 4;
+uint32_t *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i += 1) {
+uint32_t nn = n[i];
+uint32_t mm = m[i];
+if (mm & 1) {
+nn = float32_one;
+}
+d[i] = nn ^ (mm & 2) << 30;
+}
+}
+
+void HELPER(sve_ftssel_d)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm;
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i];
+uint64_t mm = m[i];
+if (mm & 1) {
+nn = float64_one;
+}
+d[i] = nn ^ (mm & 2) << 62;
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 2f23f1b192..e32be385fd 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -902,6 +902,25 @@ static void trans_FEXPA(DisasContext *s, arg_rr_esz *a, 
uint32_t insn)
vsz, vsz, 0, fns[a->esz]);
 }
 
+static void trans_FTSSEL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+NULL,
+gen_helper_sve_ftssel_h,
+gen_helper_sve_ftssel_s,
+gen_helper_sve_ftssel_d,
+};
+unsigned vsz = vec_full_reg_size(s);
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vsz, vsz, 0, fns[a->esz]);
+}
+
 /*
  *** SVE Predicate Logical Operations Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index e791fe8031..4ea3f33919 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -297,6 +297,10 @@ ADR_p640100 11 1 . 1010 .. . . 
@rd_rn_msz_rm
 # Note esz != 0
 FEXPA  0100 .. 1 0 101110 . .  @rd_rn
 
+# SVE floating-point trig select coefficient
+# Note esz != 0
+FTSSEL 0100 .. 1 . 101100 . .  @rd_rn_rm
+
 ### SVE Predicate Logical Operations Group
 
 # SVE predicate logical operations
-- 
2.14.3




[Qemu-devel] [PATCH v2 25/67] target/arm: Implement SVE Integer Wide Immediate - Predicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  10 +
 target/arm/sve_helper.c| 108 +
 target/arm/translate-sve.c |  92 ++
 target/arm/sve.decode  |  17 +++
 4 files changed, 227 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 2831e1643b..79493ab647 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -404,6 +404,16 @@ DEF_HELPER_FLAGS_4(sve_uqaddi_s, TCG_CALL_NO_RWG, void, 
ptr, ptr, s64, i32)
 DEF_HELPER_FLAGS_4(sve_uqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_uqsubi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 
+DEF_HELPER_FLAGS_5(sve_cpy_m_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_5(sve_cpy_m_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_5(sve_cpy_m_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_5(sve_cpy_m_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_4(sve_cpy_z_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index cfda16d520..6a95d1ec48 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1361,3 +1361,111 @@ void HELPER(sve_uqsubi_d)(void *d, void *a, uint64_t b, 
uint32_t desc)
 *(uint64_t *)(d + i) = (ai < b ? 0 : ai - b);
 }
 }
+
+/* Two operand predicated copy immediate with merge.  All valid immediates
+ * can fit within 17 signed bits in the simd_data field.
+ */
+void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
+ uint64_t mm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+mm = (mm & 0xff) * (-1ull / 0xff);
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i];
+uint64_t pp = expand_pred_b(pg[H1(i)]);
+d[i] = (mm & pp) | (nn & ~pp);
+}
+}
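
Aside, not part of the patch: -1ull / 0xff is 0x0101010101010101, so the multiply above replicates the low byte of the immediate into every byte lane; the expanded predicate then picks, lane by lane, the immediate or the original value. A stand-alone illustration (the pp value below is what expand_pred_b(0x05) produces for this byte-lane merge):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t mm = 0x5a;
    uint64_t rep = (mm & 0xff) * (-1ull / 0xff);
    printf("0x%016llx\n", (unsigned long long)rep);   /* 0x5a5a5a5a5a5a5a5a */

    uint64_t pp = 0x0000000000ff00ffull;   /* predicate 0x05: lanes 0 and 2 */
    uint64_t nn = 0x1111111111111111ull;   /* previous register contents */
    uint64_t d  = (rep & pp) | (nn & ~pp); /* merge: immediate where active */
    printf("0x%016llx\n", (unsigned long long)d);     /* 0x11111111115a115a */
    return 0;
}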
+
+void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
+ uint64_t mm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+mm = (mm & 0xffff) * (-1ull / 0xffff);
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i];
+uint64_t pp = expand_pred_h(pg[H1(i)]);
+d[i] = (mm & pp) | (nn & ~pp);
+}
+}
+
+void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg,
+ uint64_t mm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+mm = deposit64(mm, 32, 32, mm);
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i];
+uint64_t pp = expand_pred_s(pg[H1(i)]);
+d[i] = (mm & pp) | (nn & ~pp);
+}
+}
+
+void HELPER(sve_cpy_m_d)(void *vd, void *vn, void *vg,
+ uint64_t mm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+uint64_t nn = n[i];
+d[i] = (pg[H1(i)] & 1 ? mm : nn);
+}
+}
+
+void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+
+val = (val & 0xff) * (-1ull / 0xff);
+for (i = 0; i < opr_sz; i += 1) {
+d[i] = val & expand_pred_b(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+
+val = (val & 0xffff) * (-1ull / 0xffff);
+for (i = 0; i < opr_sz; i += 1) {
+d[i] = val & expand_pred_h(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+
+val = deposit64(val, 32, 32, val);
+for (i = 0; i < opr_sz; i += 1) {
+d[i] = val & expand_pred_s(pg[H1(i)]);
+}
+}
+
+void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 1) {
+d[i] = (pg[H1(i)] & 1 ? val : 0);
+}
+}
diff --git a/target/arm/translate-sve.c 

[Qemu-devel] [PATCH v2 47/67] target/arm: Implement SVE integer convert to floating-point

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 30 +++
 target/arm/sve_helper.c| 52 ++
 target/arm/translate-sve.c | 92 ++
 target/arm/sve.decode  | 22 +++
 4 files changed, 196 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 74c2d642a3..fb7609f9ef 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -720,6 +720,36 @@ DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve_scvt_hh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_sh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_dh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_ss, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_scvt_dd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve_ucvt_hh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_sh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_dh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_ss, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_ucvt_dd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index e259e910de..a1e0ceb5fb 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2789,6 +2789,58 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, 
uint32_t pred_desc)
 return predtest_ones(d, oprsz, esz_mask);
 }
 
+/* Fully general two-operand expander, controlled by a predicate,
+ * With the extra float_status parameter.
+ */
+#define DO_ZPZ_FP(NAME, TYPE, H, OP)\
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+for (i = 0; i < opr_sz; ) { \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+do {\
+if (pg & 1) {   \
+TYPE nn = *(TYPE *)(vn + H(i)); \
+*(TYPE *)(vd + H(i)) = OP(nn, status);  \
+}   \
+i += sizeof(TYPE), pg >>= sizeof(TYPE); \
+} while (i & 15);   \
+}   \
+}
+
+/* Similarly, specialized for 64-bit operands.  */
+#define DO_ZPZ_FP_D(NAME, TYPE, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;  \
+TYPE *d = vd, *n = vn;  \
+uint8_t *pg = vg;   \
+for (i = 0; i < opr_sz; i += 1) {   \
+if (pg[H1(i)] & 1) {\
+d[i] = OP(n[i], status);\
+}   \
+}   \
+}
+
+DO_ZPZ_FP(sve_scvt_hh, uint16_t, H1_2, int16_to_float16)
+DO_ZPZ_FP(sve_scvt_sh, uint32_t, H1_4, int32_to_float16)
+DO_ZPZ_FP(sve_scvt_ss, uint32_t, H1_4, int32_to_float32)
+DO_ZPZ_FP_D(sve_scvt_sd, uint64_t, int32_to_float64)
+DO_ZPZ_FP_D(sve_scvt_dh, uint64_t, int64_to_float16)
+DO_ZPZ_FP_D(sve_scvt_ds, uint64_t, int64_to_float32)
+DO_ZPZ_FP_D(sve_scvt_dd, uint64_t, int64_to_float64)
+
+DO_ZPZ_FP(sve_ucvt_hh, uint16_t, H1_2, uint16_to_float16)
+DO_ZPZ_FP(sve_ucvt_sh, uint32_t, H1_4, 

[Qemu-devel] [PATCH v2 23/67] target/arm: Implement SVE Element Count Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  11 ++
 target/arm/sve_helper.c| 136 ++
 target/arm/translate-sve.c | 274 -
 target/arm/sve.decode  |  30 -
 4 files changed, 448 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 4f1bd5a62f..2831e1643b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -393,6 +393,17 @@ DEF_HELPER_FLAGS_4(sve_ftssel_h, TCG_CALL_NO_RWG, void, 
ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_ftssel_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_ftssel_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_sqaddi_b, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
+DEF_HELPER_FLAGS_4(sve_sqaddi_h, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
+DEF_HELPER_FLAGS_4(sve_sqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
+DEF_HELPER_FLAGS_4(sve_sqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
+
+DEF_HELPER_FLAGS_4(sve_uqaddi_b, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
+DEF_HELPER_FLAGS_4(sve_uqaddi_h, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
+DEF_HELPER_FLAGS_4(sve_uqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
+DEF_HELPER_FLAGS_4(sve_uqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+DEF_HELPER_FLAGS_4(sve_uqsubi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
 DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index b4f70af23f..cfda16d520 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1225,3 +1225,139 @@ void HELPER(sve_ftssel_d)(void *vd, void *vn, void *vm, 
uint32_t desc)
 d[i] = nn ^ (mm & 2) << 62;
 }
 }
+
+/*
+ * Signed saturating addition with scalar operand.
+ */
+
+void HELPER(sve_sqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(int8_t)) {
+int r = *(int8_t *)(a + i) + b;
+if (r > INT8_MAX) {
+r = INT8_MAX;
+} else if (r < INT8_MIN) {
+r = INT8_MIN;
+}
+*(int8_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_sqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(int16_t)) {
+int r = *(int16_t *)(a + i) + b;
+if (r > INT16_MAX) {
+r = INT16_MAX;
+} else if (r < INT16_MIN) {
+r = INT16_MIN;
+}
+*(int16_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_sqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(int32_t)) {
+int64_t r = *(int32_t *)(a + i) + b;
+if (r > INT32_MAX) {
+r = INT32_MAX;
+} else if (r < INT32_MIN) {
+r = INT32_MIN;
+}
+*(int32_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_sqaddi_d)(void *d, void *a, int64_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(int64_t)) {
+int64_t ai = *(int64_t *)(a + i);
+int64_t r = ai + b;
+if (((r ^ ai) & ~(ai ^ b)) < 0) {
+/* Signed overflow.  */
+r = (r < 0 ? INT64_MAX : INT64_MIN);
+}
+*(int64_t *)(d + i) = r;
+}
+}
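
Aside, not part of the patch: ((r ^ ai) & ~(ai ^ b)) < 0 is the classic branch-free signed-overflow test: it is negative exactly when ai and b share a sign but the result r does not. A stand-alone illustration (the addition is done unsigned so the wraparound is well defined in plain C):

#include <stdint.h>
#include <stdio.h>

static int64_t sat_add64(int64_t ai, int64_t b)
{
    int64_t r = (int64_t)((uint64_t)ai + (uint64_t)b);  /* wrapping add */
    if (((r ^ ai) & ~(ai ^ b)) < 0) {                   /* signed overflow */
        r = (r < 0 ? INT64_MAX : INT64_MIN);            /* saturate */
    }
    return r;
}

int main(void)
{
    printf("%lld\n", (long long)sat_add64(INT64_MAX, 1));   /* stays at INT64_MAX */
    printf("%lld\n", (long long)sat_add64(INT64_MIN, -1));  /* stays at INT64_MIN */
    printf("%lld\n", (long long)sat_add64(40, 2));          /* 42 */
    return 0;
}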
+
+/*
+ * Unsigned saturating addition with scalar operand.
+ */
+
+void HELPER(sve_uqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
+int r = *(uint8_t *)(a + i) + b;
+if (r > UINT8_MAX) {
+r = UINT8_MAX;
+} else if (r < 0) {
+r = 0;
+}
+*(uint8_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_uqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(uint16_t)) {
+int r = *(uint16_t *)(a + i) + b;
+if (r > UINT16_MAX) {
+r = UINT16_MAX;
+} else if (r < 0) {
+r = 0;
+}
+*(uint16_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_uqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
+int64_t r = *(uint32_t *)(a + i) + b;
+if (r > UINT32_MAX) {
+r = UINT32_MAX;
+} else if (r < 0) {
+r = 0;
+}
+*(uint32_t *)(d + i) = r;
+}
+}
+
+void HELPER(sve_uqaddi_d)(void *d, void *a, uint64_t b, uint32_t desc)
+{
+intptr_t i, oprsz = simd_oprsz(desc);
+
+ 
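
Aside (not from the patch): the sve_sqaddi_d helper above uses the standard two's-complement test ((r ^ ai) & ~(ai ^ b)) < 0 to detect 64-bit signed overflow, since at that width there is no wider intermediate type to fall back on. A minimal stand-alone C sketch of the same check, with a hypothetical sat_add_s64 wrapper; the unsigned cast sidesteps signed-overflow UB that QEMU itself handles via its build flags:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-alone model of the saturating add in sve_sqaddi_d. */
static int64_t sat_add_s64(int64_t a, int64_t b)
{
    /* Wrapped sum, computed without signed-overflow UB. */
    int64_t r = (int64_t)((uint64_t)a + (uint64_t)b);
    /* Overflow iff a and b have the same sign and r has the opposite one. */
    if (((r ^ a) & ~(a ^ b)) < 0) {
        r = r < 0 ? INT64_MAX : INT64_MIN;
    }
    return r;
}

int main(void)
{
    printf("%lld\n", (long long)sat_add_s64(INT64_MAX, 5));  /* INT64_MAX */
    printf("%lld\n", (long long)sat_add_s64(INT64_MIN, -1)); /* INT64_MIN */
    printf("%lld\n", (long long)sat_add_s64(40, 2));         /* 42 */
    return 0;
}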

[Qemu-devel] [PATCH v2 28/67] target/arm: Implement SVE Permute - Predicates Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   6 +
 target/arm/sve_helper.c| 280 +
 target/arm/translate-sve.c | 110 ++
 target/arm/sve.decode  |  18 +++
 4 files changed, 414 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 0c9aad575e..ff958fcebd 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -439,6 +439,12 @@ DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_zip_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uzp_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 466a209c1e..c3a2706a16 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1664,3 +1664,283 @@ DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
 DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
 
 #undef DO_UNPK
+
+static const uint64_t expand_bit_data[5][2] = {
+{ 0x1111111111111111ull, 0x2222222222222222ull },
+{ 0x0303030303030303ull, 0x0c0c0c0c0c0c0c0cull },
+{ 0x000f000f000f000full, 0x00f000f000f000f0ull },
+{ 0x000000ff000000ffull, 0x0000ff000000ff00ull },
+{ 0x000000000000ffffull, 0x00000000ffff0000ull }
+};
+
+/* Expand units of 2**N bits to units of 2**(N+1) bits,
+   with the higher bits zero.  */
+static uint64_t expand_bits(uint64_t x, int n)
+{
+int i, sh;
+for (i = 4, sh = 16; i >= n; i--, sh >>= 1) {
+x = ((x & expand_bit_data[i][1]) << sh) | (x & expand_bit_data[i][0]);
+}
+return x;
+}
+
+/* Compress units of 2**(N+1) bits to units of 2**N bits.  */
+static uint64_t compress_bits(uint64_t x, int n)
+{
+int i, sh;
+for (i = n, sh = 1 << n; i <= 4; i++, sh <<= 1) {
+x = ((x >> sh) & expand_bit_data[i][1]) | (x & expand_bit_data[i][0]);
+}
+return x;
+}
+
+void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
+{
+intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+uint64_t *d = vd;
+intptr_t i;
+
+if (oprsz <= 8) {
+uint64_t nn = *(uint64_t *)vn;
+uint64_t mm = *(uint64_t *)vm;
+int half = 4 * oprsz;
+
+nn = extract64(nn, high * half, half);
+mm = extract64(mm, high * half, half);
+nn = expand_bits(nn, esz);
+mm = expand_bits(mm, esz);
+d[0] = nn + (mm << (1 << esz));
+} else {
+ARMPredicateReg tmp_n, tmp_m;
+
+/* We produce output faster than we consume input.
+   Therefore we must be mindful of possible overlap.  */
+if ((vn - vd) < (uintptr_t)oprsz) {
+vn = memcpy(&tmp_n, vn, oprsz);
+}
+if ((vm - vd) < (uintptr_t)oprsz) {
+vm = memcpy(&tmp_m, vm, oprsz);
+}
+if (high) {
+high = oprsz >> 1;
+}
+
+if ((high & 3) == 0) {
+uint32_t *n = vn, *m = vm;
+high >>= 2;
+
+for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+uint64_t nn = n[H4(high + i)];
+uint64_t mm = m[H4(high + i)];
+
+nn = expand_bits(nn, esz);
+mm = expand_bits(mm, esz);
+d[i] = nn + (mm << (1 << esz));
+}
+} else {
+uint8_t *n = vn, *m = vm;
+uint16_t *d16 = vd;
+
+for (i = 0; i < oprsz / 2; i++) {
+uint16_t nn = n[H1(high + i)];
+uint16_t mm = m[H1(high + i)];
+
+nn = expand_bits(nn, esz);
+mm = expand_bits(mm, esz);
+d16[H2(i)] = nn + (mm << (1 << esz));
+}
+}
+}
+}
+
+void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
+{
+intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
+uint64_t *d = vd, *n = vn, *m = vm;
+uint64_t l, h;
+intptr_t i;
+
+if (oprsz <= 8) {
+l = compress_bits(n[0] >> odd, esz);
+h = compress_bits(m[0] >> odd, esz);
+d[0] = extract64(l + (h << (4 * oprsz)), 
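
Aside (not from the patch): expand_bits above spreads each 2**n-bit unit apart so that a predicate for narrow elements can be rebuilt for wider ones. The mask values used in this sketch are the ones reconstructed above from the algorithm itself: row N keeps the low 2**N bits of each 2**(N+2)-bit group and moves the next 2**N bits up by 2**N. A stand-alone C check that interleaves a 4-bit pattern with zeros:

#include <stdint.h>
#include <stdio.h>

/* Masks as reconstructed: [n][0] = bits that stay in place,
   [n][1] = bits that move up by 1 << n within each group. */
static const uint64_t expand_bit_data[5][2] = {
    { 0x1111111111111111ull, 0x2222222222222222ull },
    { 0x0303030303030303ull, 0x0c0c0c0c0c0c0c0cull },
    { 0x000f000f000f000full, 0x00f000f000f000f0ull },
    { 0x000000ff000000ffull, 0x0000ff000000ff00ull },
    { 0x000000000000ffffull, 0x00000000ffff0000ull },
};

/* Expand units of 2**n bits to units of 2**(n+1), high half zero. */
static uint64_t expand_bits(uint64_t x, int n)
{
    for (int i = 4, sh = 16; i >= n; i--, sh >>= 1) {
        x = ((x & expand_bit_data[i][1]) << sh) | (x & expand_bit_data[i][0]);
    }
    return x;
}

int main(void)
{
    /* 0b1011 -> 0b01000101: each input bit lands on an even bit position. */
    printf("0x%llx\n", (unsigned long long)expand_bits(0xbull, 0));
    return 0;
}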

[Qemu-devel] [PATCH v2 21/67] target/arm: Implement SVE floating-point exponential accelerator

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  4 +++
 target/arm/sve_helper.c| 81 ++
 target/arm/translate-sve.c | 22 +
 target/arm/sve.decode  |  7 
 4 files changed, 114 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 5280d375f9..e2925ff8ec 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -385,6 +385,10 @@ DEF_HELPER_FLAGS_4(sve_adr_p64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_adr_s32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_adr_u32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index a290a58c02..4d42653eef 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1101,3 +1101,84 @@ void HELPER(sve_adr_u32)(void *vd, void *vn, void *vm, uint32_t desc)
 d[i] = n[i] + ((uint64_t)(uint32_t)m[i] << sh);
 }
 }
+
+void HELPER(sve_fexpa_h)(void *vd, void *vn, uint32_t desc)
+{
+static const uint16_t coeff[] = {
+0x0000, 0x0016, 0x002d, 0x0045, 0x005d, 0x0075, 0x008e, 0x00a8,
+0x00c2, 0x00dc, 0x00f8, 0x0114, 0x0130, 0x014d, 0x016b, 0x0189,
+0x01a8, 0x01c8, 0x01e8, 0x0209, 0x022b, 0x024e, 0x0271, 0x0295,
+0x02ba, 0x02e0, 0x0306, 0x032e, 0x0356, 0x037f, 0x03a9, 0x03d4,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / 2;
+uint16_t *d = vd, *n = vn;
+
+for (i = 0; i < opr_sz; i++) {
+uint16_t nn = n[i];
+intptr_t idx = extract32(nn, 0, 5);
+uint16_t exp = extract32(nn, 5, 5);
+d[i] = coeff[idx] | (exp << 10);
+}
+}
+
+void HELPER(sve_fexpa_s)(void *vd, void *vn, uint32_t desc)
+{
+static const uint32_t coeff[] = {
+0x000000, 0x0164d2, 0x02cd87, 0x043a29,
+0x05aac3, 0x071f62, 0x08980f, 0x0a14d5,
+0x0b95c2, 0x0d1adf, 0x0ea43a, 0x1031dc,
+0x11c3d3, 0x135a2b, 0x14f4f0, 0x16942d,
+0x1837f0, 0x19e046, 0x1b8d3a, 0x1d3eda,
+0x1ef532, 0x20b051, 0x227043, 0x243516,
+0x25fed7, 0x27cd94, 0x29a15b, 0x2b7a3a,
+0x2d583f, 0x2f3b79, 0x3123f6, 0x3311c4,
+0x3504f3, 0x36fd92, 0x38fbaf, 0x3aff5b,
+0x3d08a4, 0x3f179a, 0x412c4d, 0x4346cd,
+0x45672a, 0x478d75, 0x49b9be, 0x4bec15,
+0x4e248c, 0x506334, 0x52a81e, 0x54f35b,
+0x5744fd, 0x599d16, 0x5bfbb8, 0x5e60f5,
+0x60ccdf, 0x633f89, 0x65b907, 0x68396a,
+0x6ac0c7, 0x6d4f30, 0x6fe4ba, 0x728177,
+0x75257d, 0x77d0df, 0x7a83b3, 0x7d3e0c,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / 4;
+uint32_t *d = vd, *n = vn;
+
+for (i = 0; i < opr_sz; i++) {
+uint32_t nn = n[i];
+intptr_t idx = extract32(nn, 0, 6);
+uint32_t exp = extract32(nn, 6, 8);
+d[i] = coeff[idx] | (exp << 23);
+}
+}
+
+void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
+{
+static const uint64_t coeff[] = {
+0x0000000000000, 0x02C9A3E778061, 0x059B0D3158574, 0x0874518759BC8,
+0x0B5586CF9890F, 0x0E3EC32D3D1A2, 0x11301D0125B51, 0x1429AAEA92DE0,
+0x172B83C7D517B, 0x1A35BEB6FCB75, 0x1D4873168B9AA, 0x2063B88628CD6,
+0x2387A6E756238, 0x26B4565E27CDD, 0x29E9DF51FDEE1, 0x2D285A6E4030B,
+0x306FE0A31B715, 0x33C08B26416FF, 0x371A7373AA9CB, 0x3A7DB34E59FF7,
+0x3DEA64C123422, 0x4160A21F72E2A, 0x44E086061892D, 0x486A2B5C13CD0,
+0x4BFDAD5362A27, 0x4F9B2769D2CA7, 0x5342B569D4F82, 0x56F4736B527DA,
+0x5AB07DD485429, 0x5E76F15AD2148, 0x6247EB03A5585, 0x6623882552225,
+0x6A09E667F3BCD, 0x6DFB23C651A2F, 0x71F75E8EC5F74, 0x75FEB564267C9,
+0x7A11473EB0187, 0x7E2F336CF4E62, 0x82589994CCE13, 0x868D99B4492ED,
+0x8ACE5422AA0DB, 0x8F1AE99157736, 0x93737B0CDC5E5, 0x97D829FDE4E50,
+0x9C49182A3F090, 0xA0C667B5DE565, 0xA5503B23E255D, 0xA9E6B5579FDBF,
+0xAE89F995AD3AD, 0xB33A2B84F15FB, 0xB7F76F2FB5E47, 0xBCC1E904BC1D2,
+0xC199BDD85529C, 0xC67F12E57D14B, 0xCB720DCEF9069, 0xD072D4A07897C,
+0xD5818DCFBA487, 0xDA9E603DB3285, 0xDFC97337B9B5F, 0xE502EE78B3FF6,
+0xEA4AFA2A490DA, 0xEFA1BEE615A27, 0xF50765B6E4540, 0xFA7C1819E90D8,
+};
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+
+for (i = 0; i < opr_sz; i++) {
+uint64_t nn = n[i];
+intptr_t idx = extract32(nn, 0, 6);
+uint64_t exp = extract32(nn, 6, 11);
+d[i] = coeff[idx] | (exp 
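
(The patch is truncated here by the archive.) Aside, not from the patch: FEXPA simply splices a table-supplied fraction together with the exponent field taken from the input. A stand-alone check, assuming an IEEE-754 binary32 host float, that entry 32 of the single-precision table above (0x3504f3) encodes the fractional part of 2**(32/64) = sqrt(2):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Entry 32 of the fexpa_s coefficient table shown in the patch. */
    uint32_t frac = 0x3504f3;
    /* FEXPA result when the exponent field is 127 (unbiased exponent 0). */
    uint32_t bits = frac | (127u << 23);
    float f;
    memcpy(&f, &bits, sizeof(f));
    /* Prints ~1.414214, i.e. 2**(32/64): the table holds 2**(idx/64) - 1,
       scaled to the 23-bit fraction field. */
    printf("%f\n", f);
    return 0;
}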

[Qemu-devel] [PATCH v2 43/67] target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 +++
 target/arm/helper.h| 19 ++
 target/arm/translate-sve.c | 41 
 target/arm/vec_helper.c| 94 ++
 target/arm/Makefile.objs   |  2 +-
 target/arm/sve.decode  | 10 +
 6 files changed, 179 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/vec_helper.c

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 97bfe0f47b..2e76084992 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -705,3 +705,17 @@ DEF_HELPER_FLAGS_4(sve_umini_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_umini_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
+
+DEF_HELPER_FLAGS_5(gvec_recps_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_recps_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_rsqrts_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_rsqrts_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index be3c2fcdc0..f3ce58e276 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -565,6 +565,25 @@ DEF_HELPER_2(dc_zva, void, env, i64)
 DEF_HELPER_FLAGS_2(neon_pmull_64_lo, TCG_CALL_NO_RWG_SE, i64, i64, i64)
 DEF_HELPER_FLAGS_2(neon_pmull_64_hi, TCG_CALL_NO_RWG_SE, i64, i64, i64)
 
+DEF_HELPER_FLAGS_5(gvec_fadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fsub_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fsub_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmul_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ftsmul_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 72abcb543a..f9a3ad1434 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3109,6 +3109,47 @@ DO_ZZI(UMIN, umin)
 
 #undef DO_ZZI
 
+/*
+ *** SVE Floating Point Arithmetic - Unpredicated Group
+ */
+
+static void do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
+  gen_helper_gvec_3_ptr *fn)
+{
+unsigned vsz = vec_full_reg_size(s);
+TCGv_ptr status;
+
+if (fn == NULL) {
+unallocated_encoding(s);
+return;
+}
+status = get_fpstatus_ptr(a->esz == MO_16);
+tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   status, vsz, vsz, 0, fn);
+}
+
+
+#define DO_FP3(NAME, name) \
+static void trans_##NAME(DisasContext *s, arg_rrr_esz *a, uint32_t insn) \
+{   \
+static gen_helper_gvec_3_ptr * const fns[4] = { \
+NULL, gen_helper_gvec_##name##_h,   \
+gen_helper_gvec_##name##_s, gen_helper_gvec_##name##_d  \
+};  \
+do_zzz_fp(s, a, fns[a->esz]);   \
+}
+
+DO_FP3(FADD_zzz, fadd)
+DO_FP3(FSUB_zzz, fsub)
+DO_FP3(FMUL_zzz, fmul)
+DO_FP3(FTSMUL, ftsmul)
+DO_FP3(FRECPS, recps)
+DO_FP3(FRSQRTS, rsqrts)
+
+#undef DO_FP3
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
new file mode 100644
index 00..ad5c29cdd5
--- /dev/null
+++ b/target/arm/vec_helper.c
@@ -0,0 +1,94 @@
+/*
+ * ARM Shared AdvSIMD / SVE Operations
+ *
+ * Copyright (c) 2018 Linaro
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser 
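
(The patch is truncated here by the archive.) Aside, not from the patch: for readers new to the trans_* macro idiom, DO_FP3(FADD_zzz, fadd) above expands to roughly the following function (a sketch of the preprocessor output, meaningful only inside translate-sve.c):

static void trans_FADD_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
{
    /* One helper per element size; esz 0 (bytes) has no FP encoding,
       so do_zzz_fp raises an unallocated-encoding exception for it. */
    static gen_helper_gvec_3_ptr * const fns[4] = {
        NULL, gen_helper_gvec_fadd_h,
        gen_helper_gvec_fadd_s, gen_helper_gvec_fadd_d
    };
    do_zzz_fp(s, a, fns[a->esz]);
}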

[Qemu-devel] [PATCH v2 17/67] target/arm: Implement SVE Index Generation Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 
 target/arm/sve_helper.c| 40 +++
 target/arm/translate-sve.c | 67 ++
 target/arm/sve.decode  | 14 ++
 4 files changed, 126 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index b31d497f31..2a2dbe98dd 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -363,6 +363,11 @@ DEF_HELPER_FLAGS_6(sve_mls_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(sve_mls_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_index_b, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4b08a38ce8..950012e70a 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -991,3 +991,43 @@ DO_ZPZZZ_D(sve_mls_d, uint64_t, DO_MLS)
 #undef DO_MLS
 #undef DO_ZPZZZ
 #undef DO_ZPZZZ_D
+
+void HELPER(sve_index_b)(void *vd, uint32_t start,
+ uint32_t incr, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+uint8_t *d = vd;
+for (i = 0; i < opr_sz; i += 1) {
+d[H1(i)] = start + i * incr;
+}
+}
+
+void HELPER(sve_index_h)(void *vd, uint32_t start,
+ uint32_t incr, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 2;
+uint16_t *d = vd;
+for (i = 0; i < opr_sz; i += 1) {
+d[H2(i)] = start + i * incr;
+}
+}
+
+void HELPER(sve_index_s)(void *vd, uint32_t start,
+ uint32_t incr, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 4;
+uint32_t *d = vd;
+for (i = 0; i < opr_sz; i += 1) {
+d[H4(i)] = start + i * incr;
+}
+}
+
+void HELPER(sve_index_d)(void *vd, uint64_t start,
+ uint64_t incr, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd;
+for (i = 0; i < opr_sz; i += 1) {
+d[i] = start + i * incr;
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 8baec6c674..773f0bfded 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -675,6 +675,73 @@ DO_ZPZZZ(MLS, mls)
 
 #undef DO_ZPZZZ
 
+/*
+ *** SVE Index Generation Group
+ */
+
+static void do_index(DisasContext *s, int esz, int rd,
+ TCGv_i64 start, TCGv_i64 incr)
+{
+unsigned vsz = vec_full_reg_size(s);
+TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
+TCGv_ptr t_zd = tcg_temp_new_ptr();
+
+tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
+if (esz == 3) {
+gen_helper_sve_index_d(t_zd, start, incr, desc);
+} else {
+typedef void index_fn(TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
+static index_fn * const fns[3] = {
+gen_helper_sve_index_b,
+gen_helper_sve_index_h,
+gen_helper_sve_index_s,
+};
+TCGv_i32 s32 = tcg_temp_new_i32();
+TCGv_i32 i32 = tcg_temp_new_i32();
+
+tcg_gen_extrl_i64_i32(s32, start);
+tcg_gen_extrl_i64_i32(i32, incr);
+fns[esz](t_zd, s32, i32, desc);
+
+tcg_temp_free_i32(s32);
+tcg_temp_free_i32(i32);
+}
+tcg_temp_free_ptr(t_zd);
+tcg_temp_free_i32(desc);
+}
+
+static void trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a, uint32_t insn)
+{
+TCGv_i64 start = tcg_const_i64(a->imm1);
+TCGv_i64 incr = tcg_const_i64(a->imm2);
+do_index(s, a->esz, a->rd, start, incr);
+tcg_temp_free_i64(start);
+tcg_temp_free_i64(incr);
+}
+
+static void trans_INDEX_ir(DisasContext *s, arg_INDEX_ir *a, uint32_t insn)
+{
+TCGv_i64 start = tcg_const_i64(a->imm);
+TCGv_i64 incr = cpu_reg(s, a->rm);
+do_index(s, a->esz, a->rd, start, incr);
+tcg_temp_free_i64(start);
+}
+
+static void trans_INDEX_ri(DisasContext *s, arg_INDEX_ri *a, uint32_t insn)
+{
+TCGv_i64 start = cpu_reg(s, a->rn);
+TCGv_i64 incr = tcg_const_i64(a->imm);
+do_index(s, a->esz, a->rd, start, incr);
+tcg_temp_free_i64(incr);
+}
+
+static void trans_INDEX_rr(DisasContext *s, arg_INDEX_rr *a, uint32_t insn)
+{
+TCGv_i64 start = cpu_reg(s, a->rn);
+TCGv_i64 incr = cpu_reg(s, a->rm);
+do_index(s, a->esz, a->rd, start, incr);
+}
+
 /*
  *** SVE Predicate Logical Operations Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index b40d7dc9a2..d7b078e92f 100644
--- 
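
(The sve.decode hunk is cut off above.) Aside, not from the patch: the index helpers just fill element i with start + i * incr, with ordinary truncation to the element width. A stand-alone C model of sve_index_b on a little-endian host (so the H1() swizzle is a no-op):

#include <stdint.h>
#include <stdio.h>

/* Stand-alone model of sve_index_b. */
static void index_b(uint8_t *d, uint32_t start, uint32_t incr, size_t oprsz)
{
    for (size_t i = 0; i < oprsz; i++) {
        d[i] = start + i * incr;   /* truncation to 8 bits gives the wrap */
    }
}

int main(void)
{
    uint8_t z[16];
    index_b(z, 250, 3, sizeof(z));  /* wraps: 250, 253, 0, 3, 6, ... */
    for (size_t i = 0; i < sizeof(z); i++) {
        printf("%d ", z[i]);
    }
    printf("\n");
    return 0;
}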

[Qemu-devel] [PATCH v2 19/67] target/arm: Implement SVE Bitwise Shift - Unpredicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 12 +++
 target/arm/sve_helper.c| 30 +
 target/arm/translate-sve.c | 81 ++
 target/arm/sve.decode  | 26 +++
 4 files changed, 149 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 2a2dbe98dd..00e3cd48bb 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -368,6 +368,18 @@ DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
 DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
 DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)
 
+DEF_HELPER_FLAGS_4(sve_asr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_asr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_asr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 950012e70a..4c6e2713fa 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -614,6 +614,36 @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
 DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
 DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
 
+/* Three-operand expander, unpredicated, in which the third operand is "wide".
+ */
+#define DO_ZZW(NAME, TYPE, TYPEW, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{  \
+intptr_t i, opr_sz = simd_oprsz(desc); \
+for (i = 0; i < opr_sz; ) {\
+TYPEW mm = *(TYPEW *)(vm + i); \
+do {   \
+TYPE nn = *(TYPE *)(vn + H(i));\
+*(TYPE *)(vd + H(i)) = OP(nn, mm); \
+i += sizeof(TYPE); \
+} while (i & 7);   \
+}  \
+}
+
+DO_ZZW(sve_asr_zzw_b, int8_t, uint64_t, H1, DO_ASR)
+DO_ZZW(sve_lsr_zzw_b, uint8_t, uint64_t, H1, DO_LSR)
+DO_ZZW(sve_lsl_zzw_b, uint8_t, uint64_t, H1, DO_LSL)
+
+DO_ZZW(sve_asr_zzw_h, int16_t, uint64_t, H1_2, DO_ASR)
+DO_ZZW(sve_lsr_zzw_h, uint16_t, uint64_t, H1_2, DO_LSR)
+DO_ZZW(sve_lsl_zzw_h, uint16_t, uint64_t, H1_2, DO_LSL)
+
+DO_ZZW(sve_asr_zzw_s, int32_t, uint64_t, H1_4, DO_ASR)
+DO_ZZW(sve_lsr_zzw_s, uint32_t, uint64_t, H1_4, DO_LSR)
+DO_ZZW(sve_lsl_zzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
+
+#undef DO_ZZW
+
 #undef DO_CLS_B
 #undef DO_CLS_H
 #undef DO_CLZ_B
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 4a38020c8a..43e9f1ad08 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -130,6 +130,13 @@ static void do_mov_z(DisasContext *s, int rd, int rn)
 do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
 }
 
+/* Initialize a Zreg with replications of a 64-bit immediate.  */
+static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
+{
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_dup64i(vec_full_reg_offset(s, rd), vsz, vsz, word);
+}
+
 /* Invoke a vector expander on two Pregs.  */
 static void do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
  int esz, int rd, int rn)
@@ -644,6 +651,80 @@ DO_ZPZW(LSL, lsl)
 
 #undef DO_ZPZW
 
+/*
+ *** SVE Bitwise Shift - Unpredicated Group
+ */
+
+static void do_shift_imm(DisasContext *s, arg_rri_esz *a, bool asr,
+ void (*gvec_fn)(unsigned, uint32_t, uint32_t,
+ int64_t, uint32_t, uint32_t))
+{
+unsigned vsz = vec_full_reg_size(s);
+if (a->esz < 0) {
+/* Invalid tsz encoding -- see tszimm_esz. */
+unallocated_encoding(s);
+return;
+}
+/* Shift by element size is architecturally valid.  For
+   arithmetic right-shift, it's the same as by one less.
+   Otherwise it is a zeroing operation.  */
+if (a->imm >= 8 << a->esz) {
+if (asr) {
+a->imm = (8 << a->esz) - 1;
+} else {
+do_dupi_z(s, a->rd, 
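
(The patch is truncated here by the archive.) Aside, not from the patch: in the DO_ZZW expander above, "wide" means the shift count is taken from the 64-bit element of zm that overlaps the current group of zn elements, which is what the inner do/while over (i & 7) expresses. A stand-alone C model of the byte-sized LSL case; clamping counts >= 8 to zero is how SVE defines out-of-range logical shifts:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-alone model of sve_lsl_zzw_b: each 64-bit lane of m supplies the
   shift count for the eight bytes of n that share that lane. */
static void lsl_zzw_b(void *vd, const void *vn, const void *vm, size_t oprsz)
{
    for (size_t i = 0; i < oprsz; ) {
        uint64_t mm;
        memcpy(&mm, (const char *)vm + i, 8);
        do {
            uint8_t nn = ((const uint8_t *)vn)[i];
            ((uint8_t *)vd)[i] = mm < 8 ? (uint8_t)(nn << mm) : 0;
            i += 1;
        } while (i & 7);
    }
}

int main(void)
{
    uint8_t n[16], d[16];
    uint64_t m[2] = { 1, 4 };          /* lane 0 shifts by 1, lane 1 by 4 */
    for (int i = 0; i < 16; i++) {
        n[i] = i + 1;
    }
    lsl_zzw_b(d, n, m, sizeof(n));
    for (int i = 0; i < 16; i++) {
        printf("%d ", d[i]);
    }
    printf("\n");
    return 0;
}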

[Qemu-devel] [PATCH v2 26/67] target/arm: Implement SVE Permute - Extract Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  2 ++
 target/arm/sve_helper.c| 81 ++
 target/arm/translate-sve.c | 29 +
 target/arm/sve.decode  |  9 +-
 4 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 79493ab647..94f4356ce9 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -414,6 +414,8 @@ DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 
+DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 6a95d1ec48..fb3f54300b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1469,3 +1469,84 @@ void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
 d[i] = (pg[H1(i)] & 1 ? val : 0);
 }
 }
+
+/* Big-endian hosts need to frob the byte indices.  If the copy
+ * happens to be 8-byte aligned, then no frobbing necessary.
+ */
+static void swap_memmove(void *vd, void *vs, size_t n)
+{
+uintptr_t d = (uintptr_t)vd;
+uintptr_t s = (uintptr_t)vs;
+uintptr_t o = (d | s | n) & 7;
+size_t i;
+
+#ifndef HOST_WORDS_BIGENDIAN
+o = 0;
+#endif
+switch (o) {
+case 0:
+memmove(vd, vs, n);
+break;
+
+case 4:
+if (d < s || d >= s + n) {
+for (i = 0; i < n; i += 4) {
+*(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
+}
+} else {
+for (i = n; i > 0; ) {
+i -= 4;
+*(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
+}
+}
+break;
+
+case 2:
+case 6:
+if (d < s || d >= s + n) {
+for (i = 0; i < n; i += 2) {
+*(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
+}
+} else {
+for (i = n; i > 0; ) {
+i -= 2;
+*(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
+}
+}
+break;
+
+default:
+if (d < s || d >= s + n) {
+for (i = 0; i < n; i++) {
+*(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
+}
+} else {
+for (i = n; i > 0; ) {
+i -= 1;
+*(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
+}
+}
+break;
+}
+}
+
+void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t opr_sz = simd_oprsz(desc);
+size_t n_ofs = simd_data(desc);
+size_t n_siz = opr_sz - n_ofs;
+
+if (vd != vm) {
+swap_memmove(vd, vn + n_ofs, n_siz);
+swap_memmove(vd + n_siz, vm, n_ofs);
+} else if (vd != vn) {
+swap_memmove(vd + n_siz, vd, n_ofs);
+swap_memmove(vd, vn + n_ofs, n_siz);
+} else {
+/* vd == vn == vm.  Need temp space.  */
+ARMVectorReg tmp;
+swap_memmove(&tmp, vm, n_ofs);
+swap_memmove(vd, vd + n_ofs, n_siz);
+memcpy(vd + n_siz, &tmp, n_ofs);
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index dd085b084b..07a5eac092 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1790,6 +1790,35 @@ static void trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a, uint32_t insn)
 tcg_temp_free_i64(t_imm);
 }
 
+/*
+ *** SVE Permute Extract Group
+ */
+
+static void trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+unsigned n_ofs = a->imm >= vsz ? 0 : a->imm;
+unsigned n_siz = vsz - n_ofs;
+unsigned d = vec_full_reg_offset(s, a->rd);
+unsigned n = vec_full_reg_offset(s, a->rn);
+unsigned m = vec_full_reg_offset(s, a->rm);
+
+/* Use host vector move insns if we have appropriate sizes
+   and no unfortunate overlap.  */
+if (m != d
+&& n_ofs == size_for_gvec(n_ofs)
+&& n_siz == size_for_gvec(n_siz)
+&& (d != n || n_siz <= n_ofs)) {
+tcg_gen_gvec_mov(0, d, n + n_ofs, n_siz, n_siz);
+if (n_ofs != 0) {
+tcg_gen_gvec_mov(0, d + n_siz, m, n_ofs, n_ofs);
+}
+return;
+}
+
+tcg_gen_gvec_3_ool(d, n, m, vsz, vsz, n_ofs, gen_helper_sve_ext);
+}
+
 /*
  *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
  */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index e6e10a4f84..5e3a9839d4 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -22,8 
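
(The sve.decode hunk is cut off above.) Aside, not from the patch: functionally EXT extracts vsz bytes from the concatenation zn:zm starting at the immediate byte offset into zn, with offsets >= vsz leaving zn unchanged; the swap_memmove dance above only exists to keep byte order right on big-endian hosts and to tolerate operand overlap. A stand-alone little-endian model:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-alone model of SVE EXT for little-endian byte order:
   d = (zn:zm)[imm .. imm + vsz - 1], counted in bytes from zn[0]. */
static void sve_ext_model(uint8_t *d, const uint8_t *n, const uint8_t *m,
                          size_t vsz, size_t imm)
{
    size_t n_ofs = imm >= vsz ? 0 : imm;
    size_t n_siz = vsz - n_ofs;
    uint8_t tmp[256];

    memcpy(tmp, n + n_ofs, n_siz);      /* tail of zn ... */
    memcpy(tmp + n_siz, m, n_ofs);      /* ... followed by head of zm */
    memcpy(d, tmp, vsz);
}

int main(void)
{
    uint8_t n[16], m[16], d[16];
    for (int i = 0; i < 16; i++) {
        n[i] = i;          /* 00..0f */
        m[i] = 0x80 + i;   /* 80..8f */
    }
    sve_ext_model(d, n, m, sizeof(d), 12);
    for (int i = 0; i < 16; i++) {
        printf("%02x ", (unsigned)d[i]);   /* 0c 0d 0e 0f 80 81 ... 8b */
    }
    printf("\n");
    return 0;
}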

[Qemu-devel] [PATCH v2 39/67] target/arm: Implement SVE Predicate Count Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   2 +
 target/arm/sve_helper.c|  14 ++
 target/arm/translate-sve.c | 116 +
 target/arm/sve.decode  |  27 +++
 4 files changed, 159 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index f0a3ed3414..dd4f8f754d 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -676,3 +676,5 @@ DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d6d2220f8b..dd884bdd1c 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2702,3 +2702,17 @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 return do_zero(vd, oprsz);
 }
 }
+
+uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
+{
+intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
+intptr_t i;
+
+for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+uint64_t t = n[i] & g[i] & mask;
+sum += ctpop64(t);
+}
+return sum;
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index dc95d68867..038800cc86 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -36,6 +36,8 @@
 typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
 typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t,
  int64_t, uint32_t, uint32_t);
+typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
+ TCGv_i64, uint32_t, uint32_t);
 typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
 uint32_t, uint32_t, uint32_t);
 
@@ -2731,6 +2733,120 @@ void trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
 do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
 }
 
+/*
+ *** SVE Predicate Count Group
+ */
+
+static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
+{
+unsigned psz = pred_full_reg_size(s);
+
+if (psz <= 8) {
+uint64_t psz_mask;
+
+tcg_gen_ld_i64(val, cpu_env, pred_full_reg_offset(s, pn));
+if (pn != pg) {
+TCGv_i64 g = tcg_temp_new_i64();
+tcg_gen_ld_i64(g, cpu_env, pred_full_reg_offset(s, pg));
+tcg_gen_and_i64(val, val, g);
+tcg_temp_free_i64(g);
+}
+
+/* Reduce the pred_esz_masks value simply to reduce the
+   size of the code generated here.  */
+psz_mask = deposit64(0, 0, psz * 8, -1);
+tcg_gen_andi_i64(val, val, pred_esz_masks[esz] & psz_mask);
+
+tcg_gen_ctpop_i64(val, val);
+} else {
+TCGv_ptr t_pn = tcg_temp_new_ptr();
+TCGv_ptr t_pg = tcg_temp_new_ptr();
+unsigned desc;
+TCGv_i32 t_desc;
+
+desc = psz - 2;
+desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+
+tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
+tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
+t_desc = tcg_const_i32(desc);
+
+gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
+tcg_temp_free_ptr(t_pn);
+tcg_temp_free_ptr(t_pg);
+tcg_temp_free_i32(t_desc);
+}
+}
+
+static void trans_CNTP(DisasContext *s, arg_CNTP *a, uint32_t insn)
+{
+do_cntp(s, cpu_reg(s, a->rd), a->esz, a->rn, a->pg);
+}
+
+static void trans_INCDECP_r(DisasContext *s, arg_incdec_pred *a,
+uint32_t insn)
+{
+TCGv_i64 reg = cpu_reg(s, a->rd);
+TCGv_i64 val = tcg_temp_new_i64();
+
+do_cntp(s, val, a->esz, a->pg, a->pg);
+if (a->d) {
+tcg_gen_sub_i64(reg, reg, val);
+} else {
+tcg_gen_add_i64(reg, reg, val);
+}
+tcg_temp_free_i64(val);
+}
+
+static void trans_INCDECP_z(DisasContext *s, arg_incdec2_pred *a,
+uint32_t insn)
+{
+unsigned vsz = vec_full_reg_size(s);
+TCGv_i64 val = tcg_temp_new_i64();
+GVecGen2sFn *gvec_fn = a->d ? tcg_gen_gvec_subs : tcg_gen_gvec_adds;
+
+if (a->esz == 0) {
+unallocated_encoding(s);
+return;
+}
+do_cntp(s, val, a->esz, a->pg, a->pg);
+gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
+vec_full_reg_offset(s, a->rn), val, vsz, vsz);
+}
+
+static void trans_SINCDECP_r_32(DisasContext *s, arg_incdec_pred *a,
+uint32_t insn)
+{
+TCGv_i64 reg = cpu_reg(s, a->rd);
+TCGv_i64 val = tcg_temp_new_i64();
+
+do_cntp(s, val, a->esz, a->pg, a->pg);
+do_sat_addsub_32(reg, val, a->u, 
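
(The patch is truncated here by the archive.) Aside, not from the patch: sve_cntp counts the elements that are active in both pn and pg; a predicate register holds one bit per vector byte, and pred_esz_masks[esz] keeps only the bit that governs each element of the given size. A stand-alone sketch, with the mask values assumed to match QEMU's pred_esz_masks:

#include <stdint.h>
#include <stdio.h>

/* One significant bit per element: every bit for bytes, every 2nd bit
   for halfwords, every 4th for words, every 8th for doublewords. */
static const uint64_t pred_esz_masks[4] = {
    0xffffffffffffffffull, 0x5555555555555555ull,
    0x1111111111111111ull, 0x0101010101010101ull
};

static uint64_t cntp(const uint64_t *pn, const uint64_t *pg,
                     size_t oprsz_bytes, int esz)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < (oprsz_bytes + 7) / 8; i++) {
        /* __builtin_popcountll is the GCC/Clang counterpart of ctpop64. */
        sum += __builtin_popcountll(pn[i] & pg[i] & pred_esz_masks[esz]);
    }
    return sum;
}

int main(void)
{
    /* 32-byte predicate (i.e. a 256-byte vector), all elements active. */
    uint64_t pn[4] = { -1ull, -1ull, -1ull, -1ull };
    uint64_t pg[4] = { -1ull, -1ull, -1ull, -1ull };
    printf("%llu\n", (unsigned long long)cntp(pn, pg, 32, 0)); /* 256 byte elements */
    printf("%llu\n", (unsigned long long)cntp(pn, pg, 32, 3)); /* 32 doubleword elements */
    return 0;
}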

[Qemu-devel] [PATCH v2 14/67] target/arm: Implement SVE Integer Arithmetic - Unary Predicated Group

2018-02-17 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  60 +
 target/arm/sve_helper.c| 127 +
 target/arm/translate-sve.c | 111 +++
 target/arm/sve.decode  |  23 
 4 files changed, 321 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index d516580134..11644125d1 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -285,6 +285,66 @@ DEF_HELPER_FLAGS_4(sve_asrd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asrd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_asrd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve_cls_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cls_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_clz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_clz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_cnot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_cnot_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fabs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fabs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fabs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_fneg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fneg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_fneg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_not_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_not_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_sxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_uxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_sxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_uxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_abs_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_abs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve_neg_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve_neg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 3054b3cc99..e11823a727 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -499,6 +499,133 @@ DO_ZPZW(sve_lsl_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
 
 #undef DO_ZPZW
 
+/* Fully general two-operand expander, controlled by a predicate.
+ */
+#define DO_ZPZ(NAME, TYPE, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)  \
+{
