This patch series extends the code in arch/powerpc/lib/sstep.c so that it handles almost all load and store instructions -- all except the atomic memory operations (lwat, stwat, etc.). It also makes sure that we use the largest possible aligned accesses when accessing memory, and that we don't touch the CPU FP/VMX/VSX registers when they don't contain user data.
With this, it should be possible to replace the body of the alignment interrupt handler with a call to emulate_step() or something quite similar.

Paul.

 arch/powerpc/include/asm/sstep.h |   73 +-
 arch/powerpc/lib/Makefile        |    2 +-
 arch/powerpc/lib/ldstfp.S        |  305 ++-----
 arch/powerpc/lib/quad.S          |   62 ++
 arch/powerpc/lib/sstep.c         | 1773 +++++++++++++++++++++++++++++---------
 5 files changed, 1564 insertions(+), 651 deletions(-)